Introduction

Posttraumatic stress disorder (PTSD), a psychiatric disorder that can result from exposure to a traumatic event such as war, assault, or natural disaster (American Psychiatric Association 2013), is particularly prevalent among veterans. The overall prevalence of lifetime PTSD among veterans in the general population is 6.9% (Smith et al. 2016). Among veterans from the wars in Afghanistan and Iraq who use Veterans Health Administration (VHA) medical care, over one in four carry a PTSD diagnosis (Harpaz-Rotem and Hoff 2018). Left untreated, PTSD can result in significant impairments across multiple domains of functioning (Sayer et al. 2010; Zatzick et al. 1997). Fortunately, evidence-based treatments are available.

Trauma-focused psychotherapies are recommended as the most effective treatment across PTSD Clinical Practice Guidelines (Hamblen et al. 2019). Based on systematic reviews of the literature (Lee et al. 2016), several of these guidelines, including the 2017 Department of Veterans Affairs (VA)/Department of Defense (DoD) Clinical Practice Guideline for PTSD (Department of Veterans Affairs/Department of Defense 2017), recommend trauma-focused psychotherapies over recommended medications. Over the past decade, VHA has disseminated two trauma-focused psychotherapies for PTSD—Cognitive Processing Therapy (CPT) (Resick et al. 2014) and Prolonged Exposure (PE) (Foa et al. 2007). The Evidence-Based Psychotherapy (EBP) dissemination initiative includes competency-based training of thousands of mental health clinicians, policy requirements to make these treatments available at all facilities (U.S. Department of Veterans Affairs 2008), the designation of local EBP champions for each facility, and national PTSD consultation and mentoring programs to support clinicians (Bernardy et al. 2011; Karlin and Cross 2014). VHA performance measures for PTSD have been adapted to credit facilities for use of EBPs for PTSD (Department of Veterans Affairs 2020).

Despite the considerable resources devoted to ensuring veterans with PTSD have access to CPT and PE, converging evidence indicates that reach of these EBPs to veterans with PTSD is generally low (Maguen et al. 2018; Rosen et al. 2016). Intervention reach, one metric of the public health impact of a health promotion initiative (Glasgow et al. 1999), can be considered in terms of the percent and representativeness of individuals within a defined population who receive the target intervention. Among veterans who received psychotherapy for PTSD in 2015, 8.5% received at least one session documented by an EBP template in VHA’s electronic medical record (Sripada et al. 2018). VHA has established a system of PTSD specialty care programs to ensure that veterans with PTSD receive high quality disorder-specific care (Department of Veterans Affairs 2017). However, there is considerable variation in CPT and PE reach across the VHA’s specialized outpatient PTSD programs. In many PTSD programs, only a small proportion of therapy patients with PTSD received at least one session documented by an EBP template, whereas in other PTSD programs, more than half of all therapy patients with PTSD had a templated CPT or PE note (Sayer et al. 2017).

Prior work from the Promoting Effective, Routine and Sustained Implementation of Stress Treatments (PERSIST) project examined team and organizational factors that differentiated PTSD clinics with high reach of EBPs for PTSD from PTSD clinics with low reach (Sayer et al. 2017). High- and low-reach PTSD clinics differed along five themes—team mission, staff engagement, clinic operations, staff perceptions, and the broader practice environment. High-reach PTSD clinics were configured around a short-term EBP treatment model embodied in a team mission and had developed strategies to engage staff and operations to facilitate EBP delivery. In low-reach clinics, in contrast, EBPs seemed to be largely grafted onto clinic structures and processes that were not tailored to EBPs. High-reach clinics also had an internal leader with the combination of authority, motivation, and knowledge to secure support from facility leadership in the EBP-focused mission, spearhead EBP-congruent changes, and overcome local implementation obstacles; low-reach clinics did not.

We designed an implementation intervention to help PTSD clinics reconfigure along the five dimensions listed above to function as high-reach clinics. The implementation intervention involved external facilitation (Stetler et al. 2006a) guided by a toolkit that bundled strategies used by high-reach clinics with tools to enact these strategies. This approach was informed by the Promoting Action on Research Implementation in Health Services (PARIHS) (Kitson et al. 2008; Stetler et al. 2011) framework, which specifies that successful implementation of research evidence is a function of evidence, context, and facilitation. Specifically, successful implementation will occur when evidence is robust and providers buy into it, the context is receptive, and implementation processes are appropriately facilitated by internal and/or external facilitators (Kitson et al. 2008). The toolkit-guided external facilitation was designed to influence the inner (e.g., clinic operations) and outer (e.g., facility support for the clinic’s mission and operations) context for EBP delivery in order to increase the reach of EBPs to more patients with PTSD.

This article describes an evaluation of toolkit-guided external facilitation in the context of a quality improvement project. The primary objective was to determine whether toolkit-guided external facilitation improved EBP reach in PTSD clinics that were not using EBPs for PTSD regularly. We expected the magnitude of improvement in EBP reach to be greater in PTSD clinics that received toolkit-guided external facilitation compared with PTSD clinics at matched control sites. The secondary objective was to determine whether improvement in EBP reach in the PTSD clinics was offset by decrements in EBP reach in other mental health clinics due to reallocation of workload. The third objective was to describe the changes PTSD teams made in response to toolkit-guided external facilitation and thereby illuminate how the intervention worked.

Method

This was a prospective, quasi-experimental pre-post nonequivalent control group design. The Revised Standards for Quality Improvement Reporting Excellence (SQUIRE 2.0) guidelines provided the framework for this article (Ogrinc et al. 2016).

Context and Site Selection

The intervention targeted specialized outpatient PTSD clinics, which play a central role in delivery of disorder-specific care in VHA. We purposefully selected two PTSD clinics that met the following inclusion criteria: (a) need for intervention, as determined by having low EBP reach, (b) presence of a champion for EBP delivery willing to serve as an internal change agent (Damschroder et al. 2009; Ritchie et al. 2017), and (c) facility leadership support for the project. We operationalized low EBP reach as PTSD clinics that provided EBPs for PTSD to 15% or fewer of their patients with PTSD who received psychotherapy. We selected 15% as the cut point because it was the median EBP reach across VHA’s 128 specialized outpatient PTSD clinics during a 12-month period before the project start.

Two of the 64 low-reach PTSD clinics were recruited through VHA’s PTSD mentoring program for PTSD clinics (Bernardy et al. 2011). Three control PTSD clinics with low reach were matched to each of the selected intervention clinics (which also meant the inclusion of the matched sites’ other mental health clinics). To approximately satisfy the assumption of independence conditional on past outcomes, an alternative to the parallel trend assumption (O’Neill et al. 2016), each intervention site was matched to control sites based on the pre-treatment (calendar year 2017) value of our primary outcome (EBP reach in the PTSD clinic) and the following covariates: number of therapy patients with PTSD seen in the PTSD clinic, number of patients with PTSD seen in the facility, and composition of these patients in terms of age, gender, race, and ethnicity. Multivariate matching was done in a hierarchical manner. At each step of the matching process, the closest sites to each intervention site were selected using the nearest neighbor method. Because each intervention site had a unique combination of facility-level matching variables, the hierarchical order for matching was slightly different for each intervention site, with the pre-treatment PTSD clinic characteristics prioritized. The order of entry of the matching variables was chosen to improve the possibility of obtaining well-matched control to intervention sites. Matching was performed using the MatchIt R package. Supplemental Table 1 details summary statistics of the variables used for matching by site, as well as the order in which they were used. Matching occurred before the intervention; the implementation team was blinded to the identity of the matched sites.
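For illustration, a simplified, non-hierarchical version of this matching step could look like the following R sketch. The data frame and column names are hypothetical; the actual matching was performed hierarchically with a site-specific ordering of variables rather than in a single call.

```r
# Hypothetical sketch of 1:3 nearest-neighbor matching of control to
# intervention sites; the project's actual matching was hierarchical,
# with a different variable ordering per intervention site.
library(MatchIt)

m_out <- matchit(
  intervention ~ reach_2017 + n_ptsd_therapy_clinic + n_ptsd_facility +
    mean_age + pct_female + pct_nonwhite + pct_hispanic,
  data   = site_df,       # one row per candidate PTSD clinic/site (hypothetical)
  method = "nearest",     # nearest-neighbor matching
  ratio  = 3              # three control sites per intervention site
)

summary(m_out)                      # balance diagnostics
matched_sites <- match.data(m_out)  # intervention sites plus matched controls
```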

Table 1 External facilitation activities and corresponding implementation strategy classification

The Implementation Intervention

The implementation intervention was external facilitation (Stetler et al. 2006a) guided by a toolkit that bundled tools and resources used by high-reach clinics to develop a mission statement centered around delivery of EBPs, build leader and staff engagement in this mission, develop clinic procedures that facilitate EBP delivery, and foster positive staff perceptions and facility leadership support for the team mission (Sayer et al. 2017). In short, the toolkit was designed to help the external facilitator guide the low-reach clinics to develop the organizational features characteristic of high reach clinics.

External facilitation involves having an individual from outside the site who is an expert in implementation strategies and tools and who has credible knowledge about the clinical innovation interact with the target site to enact changes (Ritchie et al. 2017). This project’s external facilitator was a VHA staff member who already had formal training and experience with facilitation, experience consulting with PTSD clinics through the PTSD mentoring program, and clinical experience delivering EBPs. Time to work on the project was built into the position held by the external facilitator. The external facilitator worked closely with a local champion who was charged with helping their PTSD clinic enact team-level changes. The external facilitator received support and assistance with problem-solving from the project implementation team and, as needed, external consultants in EBPs and/or VHA mental health policies.

The external facilitator used multiple implementation strategies while supporting and enabling the site’s champion. Planned facilitation activities corresponded to 19 of the 73 implementation strategies described by Powell et al. (2015) (Table 1). Six months of intensive facilitation began with a site visit to engage stakeholders, including leadership, and develop a site-specific Implementation Planning Guide that served as the site’s implementation blueprint (Ritchie et al. 2017). The external facilitator had contact with the champions in the months before the site visit to assess implementation barriers and facilitators and select stakeholders for the site visit. Both the assessment of barriers and facilitators and the Implementation Planning Guide were organized around the five PERSIST themes (mission, engagement, operations, perceptions, broader practice environment) (Sayer et al. 2017).

After the site visit, the external facilitator and local champion scheduled weekly 60-min phone calls and communicated by email as needed. During these contacts the facilitator and champion reviewed progress, which the external facilitator documented, and the external facilitator provided support, technical assistance, education, and coaching of the champion. Materials for technical assistance and education were drawn from the toolkit. External facilitation took place between June 2018 and January 2019 for intervention site 1 and between July 2018 and February 2019 for intervention site 2. After 6 months of facilitation (intervention period), the external facilitator scheduled four brief check-in calls and remained available for consultation or coaching for an additional 6 months (maintenance period), as requested by the site champions. Throughout the intervention and maintenance periods, the external facilitator provided monthly reports to monitor the clinic and associated providers’ use of EBPs.

The six control sites received usual assistance and support for EBP delivery, which included access to: (1) national and regional competency-based training in CPT and PE, (2) local EBP champions to support efforts to improve EBP uptake, (3) consultation and mentoring in EBP delivery through national programs, (4) a dashboard for tracking EBP use, (5) National Center for PTSD shared decision-making tools, and (6) patient educational material specific to EBPs for PTSD.

Measures

Figure 1 presents a diagram of the evaluation design, including the timing of the two evaluation periods (pre-intervention, baseline; post-intervention, maintenance). The 6-month baseline period covered October 20, 2017 through April 17, 2018 for site 1 and November 27, 2017 through May 25, 2018 for site 2. The 6-month post-intervention maintenance period covered December 18, 2018 through June 15, 2019 for site 1 and January 26, 2019 through July 24, 2019 for site 2.

Fig. 1 Evaluation timeline

The data source for measures to address objectives 1 and 2 was VHA’s repository of clinical and administrative data, available through the Corporate Data Warehouse (CDW). Formative evaluation data were used to address objective 3.

Outcome Measures

The primary outcome was EBP reach to therapy patients seen in the PTSD clinic at intervention and control sites. Our secondary outcome was EBP reach in other mental health clinics. To calculate EBP reach, we used CDW data to identify all patients at the intervention and control sites who had psychotherapy for PTSD as outpatients in PTSD clinics or any other mental health clinic within the site. Each psychotherapy visit was classified as CPT, PE, or “other” using variables automatically generated by structured EBP templates. Using a natural language processing algorithm that we had previously developed (Sayer et al. 2017), we found that the EBP templates identified 96% of EBPs for PTSD at the eight performance sites during the two evaluation periods.

We operationalized EBP reach as the proportion of veterans diagnosed with PTSD during a psychotherapy appointment who received at least one CPT or PE session during the 6-month pre- and post-intervention evaluation periods. This approach is consistent with prior research (Sayer et al. 2017; Sripada et al. 2018). Because the external facilitator began informally working with the local champion in the months preceding the site visit, we excluded the two months before the intervention from the pre-intervention assessment period.
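As a minimal sketch of how reach can be computed from visit-level records, the following R fragment is illustrative only; the data frame visit_df and its columns are hypothetical stand-ins for the CDW extract.

```r
# Hypothetical sketch: proportion of therapy patients with PTSD who had at
# least one CPT or PE session (identified via EBP template variables) in a
# given clinic type and evaluation period.
library(dplyr)

reach_by_clinic <- visit_df %>%                 # one row per psychotherapy visit
  group_by(period, clinic_type, patient_id) %>%
  summarise(received_ebp = any(ebp_template %in% c("CPT", "PE"))) %>%
  group_by(period, clinic_type) %>%
  summarise(n_patients = n(),                   # therapy patients with PTSD
            reach      = mean(received_ebp))    # proportion with >= 1 EBP session
```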

Other Measures

We extracted the following variables for patients with PTSD who received psychotherapy at intervention and control sites during the two 6-month evaluation periods: age, gender, race, ethnicity, marital status, period of military service, driving distance from home to the VHA facility, psychiatric hospitalization, VA disability status for PTSD, and type and number of psychiatric comorbidities in addition to PTSD.

Because EBPs are time-intensive compared with unstructured therapies, staffing is a clinic-level variable that may affect providers’ use of EBPs (Finley et al. 2015). We used the provider identification numbers associated with completed appointments in the CDW to calculate the number of providers working in PTSD and other mental health clinics. We then constructed a measure of clinic staffing by dividing the number of patients with PTSD seen by team members for any treatment in the evaluation periods by the number of clinic providers, as done in prior research (Mohr et al. 2018), and included staffing as a covariate. This measure was constructed separately for PTSD and other mental health clinics.
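A rough sketch of that staffing covariate is below; clinic_visits and its columns are hypothetical.

```r
# Hypothetical sketch: clinic staffing = patients with PTSD seen for any
# treatment divided by the number of providers with completed appointments,
# computed separately by clinic type and evaluation period.
library(dplyr)

staffing <- clinic_visits %>%
  group_by(site, clinic_type, period) %>%
  summarise(
    n_patients  = n_distinct(patient_id),
    n_providers = n_distinct(provider_id)
  ) %>%
  mutate(patients_per_provider = n_patients / n_providers) %>%
  ungroup()
```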

Formative Evaluation

The formative evaluation included four stages—developmental, implementation-focused, progress-focused, and interpretive (Stetler et al. 2006b). Data collected for each stage are listed in Table 2. For the interpretive evaluation included in this article, post-intervention semi-structured interviews were conducted with 13 key informants at the two intervention sites, two of whom were interviewed a second time after the maintenance period. Structured templates, based on an interview guide, were used to summarize interview notes. Using matrix analytic techniques, we developed a matrix summarizing interview findings for each site. Implementation Planning Guides and site worksheets were integrated into the final site matrices.

Table 2 Formative evaluation data sources by formative evaluation stage

Data Analysis

To identify possible imbalances in patient characteristics across intervention and control sites during our evaluation periods, we used Pearson’s or Kruskal–Wallis Chi-square tests, depending on the variable type and distribution. Driving distance was the only patient characteristic with missing values. Missing driving distance was multiply imputed (25 copies), and multiple imputation analysis was conducted using the mice R package.
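A minimal sketch of that imputation step is shown below, assuming a patient-level data frame pt_df (hypothetical name) in which only driving distance is missing.

```r
# Hypothetical sketch: 25 multiply imputed copies of the patient-level data;
# downstream models are fit on each copy and pooled.
library(mice)

imp <- mice(pt_df, m = 25, seed = 2018, printFlag = FALSE)
# e.g., fits   <- with(imp, glm(received_ebp ~ driving_distance + age, family = binomial))
#       pooled <- pool(fits)
```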

We used difference-in-difference (DID) effect estimation (Dimick and Ryan 2014) to compare changes in EBP reach in PTSD and other mental health clinics between the intervention and control sites. The standard error (SE) for the DID accounted for the fact that the DID had both an independent and a dependent component, because some patients were seen in only one evaluation period and others were seen in both.

To incorporate the possible clustering effect due to the matching design and repeated measurements, and to adjust for covariates that remained imbalanced (p < 0.05) after matching, we used mixed effects logistic regression to model the probability of a patient’s receipt of an EBP (yes or no). The models included condition (intervention or control), time (pre or post), and the condition-by-time interaction as fixed effects, and random intercepts for matched sets and for sites within matched sets to account for clustering due to matching and site membership. If the interaction term was significant, we presented the simple effects as an odds ratio of receiving an EBP (post over pre) for each type of site (intervention or control) with its 95% Confidence Interval (CI). The ratio of the odds ratios (RoR), the DID estimate on the odds scale, was obtained by exponentiating the interaction coefficient. Analyses were performed using SAS® 9.4 and R version 3.5.1.
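One way to specify such a model in R is sketched below. The article reports using SAS 9.4 and R 3.5.1, so this lme4-based version, with hypothetical variable names and without the multiple-imputation pooling, is illustrative rather than the analysis actually run.

```r
# Hypothetical sketch of the mixed effects logistic regression:
# condition, time, and their interaction as fixed effects; random intercepts
# for matched sets and for sites nested within matched sets.
library(lme4)

fit <- glmer(
  received_ebp ~ condition * time + age + gender + race + staffing +  # adjusted covariates
    (1 | matched_set) + (1 | matched_set:site),                       # random intercepts
  data   = pt_df,
  family = binomial
)

# Exponentiating the interaction coefficient gives the ratio of odds ratios
# (the DID estimate on the odds scale); the coefficient's name depends on
# the factor coding used.
exp(fixef(fit))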

Power Estimate

The power calculation was based on a comparison of proportions in a cluster-designed study, as implemented in PASS 13. We had 85.6% power to detect a 5% DID in EBP reach between the intervention and control sites, assuming an intra-cluster correlation of 0.002, an average cluster size of 675 patients, and 10% reach at the matched control sites.
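The 85.6% figure comes from PASS 13; a back-of-the-envelope check of the same assumptions can be sketched in base R using a simple design-effect correction. This approximation is conservative (it uses the smaller effective arm) and will not reproduce the PASS result exactly.

```r
# Rough design-effect check, not the PASS 13 calculation.
icc  <- 0.002
m    <- 675                      # average cluster (site) size
deff <- 1 + (m - 1) * icc        # design effect, about 2.35

n_int  <- 2 * m / deff           # effective n, intervention arm (2 sites)
n_ctrl <- 6 * m / deff           # effective n, control arm (6 sites)

power.prop.test(n = min(n_int, n_ctrl),          # conservative: smaller arm
                p1 = 0.10, p2 = 0.15,            # 10% vs 15% reach
                sig.level = 0.05)
```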

Ethics Statement

The Minneapolis VA Health Care System IRB reviewed this project and determined that the activities involved did not meet the definition of research. This project is designed for internal VHA purposes in support of the VA mission and findings are to be used within VHA to improve processes of care. All data collected pertain to quality improvement activities. Our primary operational partner (VHA’s Office of Mental Health and Suicide Prevention) provided documentation that the project involved non-research operations activities.

Results

Characteristics of patients seen during the evaluation periods by condition are presented in Table 3. As can be seen, 29,446 unique patients with PTSD received psychotherapy in the two intervention and six control sites over the two assessment periods, 8,886 of whom received psychotherapy in the PTSD clinics and 20,580 of whom received psychotherapy in other mental health clinics. The imbalanced (p < 0.05) patient characteristics after site matching were age, race, gender, marital status, period of military service, driving distance, psychiatric hospitalization, VA disability status for PTSD, and anxiety and substance use disorders. We did not include period of service in our adjusted models because it was highly correlated with age.

Table 3 Characteristics of patients with PTSD who received psychotherapy at intervention and control sites during the evaluation periods

Figure 2 displays the primary outcome during the two evaluation periods at each of the eight sites. Table 4 presents results for changes in reach in intervention versus control sites in PTSD and other (non-PTSD specialty) mental health clinics. The proportion of therapy patients with PTSD who received an EBP increased by 16.98 percentage points in the intervention PTSD clinics compared with 0.45 percentage points in the control PTSD clinics. The DID was 16.53% (SE = 2.26%) and highly significant (z = 9.18). There was no difference between the change in reach in other mental health clinics in the intervention and matched control sites.

Fig. 2 EBP reach for PTSD clinics during the 6-month pre- and post-intervention periods at intervention and matched control sites

Table 4 Difference-in-difference between intervention and control sites from pre- to post-intervention

Table 5 shows results from the mixed logistic regression models that accounted for the matching design, imbalanced patient characteristics, and clinic staffing. The intervention PTSD clinics had a significant increase in the adjusted odds of a patient receiving an EBP (AOR 3.05, 95% CI 2.46–3.78) while there was no change in the adjusted odds of a patient receiving an EBP in the control PTSD clinics (AOR 1.03, 95% CI 0.88–1.22). Adjusting for imbalances after matching, staffing and the matching design, the odds ratio for a patient receiving an EBP from pre to post intervention was almost three times larger in the intervention than in the control PTSD clinics (RoR 2.90, 95% CI 2.22–3.80). For other mental health clinics, the adjusted model also showed a significant RoR, though for a different reason. While there was no change in the adjusted odds of a patient receiving an EBP in other mental health clinics at the intervention sites (AOR 1.18, 95% CI 0.97–1.44), there was a slight decrease in the adjusted odds of a patient receiving an EBP in other mental health clinics at the control sites (AOR 0.86, 95% CI 0.75–0.99).

Table 5 Estimated effect of intervention on receipt of an evidence-based psychotherapy for PTSD in PTSD and other mental health clinics

Figure 2 shows a larger increase from pre to post intervention in the proportion of therapy patients with PTSD receiving an EBP at site 1's PTSD clinic compared with site 2's. In sensitivity analyses we repeated the main analyses for each set of sites separately (Supplemental Tables 2 and 3). The DID for site 1's PTSD clinics was 26.55% (SE = 2.73%) and highly significant (z = 9.73). The DID for site 2's PTSD clinics was 5.21% (SE = 3.88%) and not significant (z = 1.34). The improvement in the adjusted odds of a patient receiving an EBP from pre to post intervention was significant for both intervention sites' PTSD clinics but not for the control sites' PTSD clinics. However, the RoR was significant for site 1 (RoR 5.75, 95% CI 3.96–8.34) but not for site 2 (RoR 1.29, 95% CI 0.89–1.89).

Over the 6-month intervention period, the external facilitator had 14 phone calls and 15 email conversations with the local champion at site 1, and 13 phone calls and 10 email conversations with the local champion at site 2. Information on the changes the intervention PTSD clinics enacted informs understanding of how external facilitation effected the observed changes. As shown in Table 6, both intervention PTSD clinics made numerous changes in response to external facilitation, particularly in clinic operations, in the service of improving EBP reach to patients with PTSD. These changes mapped onto the widely-used implementation strategy classification system from the Expert Recommendations for Implementation Change (ERIC) project (Powell et al. 2015).

Table 6 PTSD clinic changes to improve EBP reach and corresponding implementation strategy

Discussion

This intervention addressed the problem that, a decade into VHA's CPT and PE dissemination initiatives, the majority of veterans diagnosed with PTSD were not receiving either of these proven treatments, even when seen in specialized outpatient PTSD programs (Maguen et al. 2018; Rosen et al. 2016; Sayer et al. 2017; Sripada et al. 2018). Overall, the intervention PTSD clinics improved by 16.53 percentage points more than control PTSD clinics after 6 months of toolkit-guided external facilitation and no longer met criteria for the low-reach designation. We were heartened to see that improvements in reach in the PTSD clinic were not offset by reductions in EBP reach in other mental health clinics in the intervention sites, as might be expected if EBP workload was simply being rearranged across clinics rather than increased through the implementation intervention.

Although EBP reach more than doubled in the intervention PTSD clinics, an important question is whether the 28.39% reach level achieved represents the desired endpoint for EBP reach. VHA does not mandate that PTSD clinics meet a specified level of EBP reach, and experts still debate how much VHA should prioritize EBPs over other treatments, given the complexity of military-related PTSD (Steenkamp et al. 2020). Furthermore, as described in clinical practice guidelines, patient preference should play a central role in PTSD treatment selection (Department of Veterans Affairs/Department of Defense 2017). The heterogeneity of treatment needs and preferences helps explain why, on average, even high-reach PTSD clinics provide psychotherapies other than EBPs to more than half of their patients (Sayer et al. 2017). Taking this into consideration, we find it promising that more than one quarter of PTSD patients in intervention PTSD clinics received an EBP during the maintenance period. Longitudinal data, however, would allow for a more informed interpretation of the value of the observed EBP reach endpoint of 28.39%. For example, enthusiasm for toolkit-guided external facilitation would be strengthened if it were found that the intervention PTSD clinics maintained their gains or eventually became high-reach clinics. On the other hand, enthusiasm would be dampened if intervention PTSD clinics were to revert to baseline, low-reach levels as time from intervention elapsed.

External facilitation is not always an effective implementation strategy. The reasons that external facilitation is effective in some (Kirchner et al. 2014) but not other (Harris et al. 2017) contexts are an important area of study. Specific to this project was the external facilitator's use of a customized toolkit that provided a conceptual map of successful clinics to guide assessment of local barriers and facilitators, implementation planning, and the team-level changes that were encouraged. We posit that providing the external facilitator with a roadmap to high-reach PTSD clinics through the toolkit helped standardize the goals of external facilitation, even though the teams implemented different changes to realize these goals, and helped the clinics structure themselves to be more like these exemplary clinics. However, we also observed that the improvement in EBP reach was considerably larger in one intervention PTSD clinic than in the other. An examination of the reasons for this difference could inform efforts to improve the intervention's effectiveness and to match implementation strategies to clinic characteristics. While this project's formative evaluation provides a rich assessment of plausible modifying factors, the small number of intervention clinics necessarily limits the conclusions that can be drawn from between-site comparisons.

We observed little change in EBP reach in the six control PTSD clinics from the pre- to the post-intervention evaluation period. This suggests that, in the absence of active intervention, there would be very little improvement in EBP reach to patients with PTSD. VHA has developed and disseminated programs and tools to improve PTSD care, such as the PTSD mentoring program for PTSD clinics (Bernardy et al. 2011), PTSD treatment decision aids and EBP promotional materials, as well as performance measures that credit EBP use (Department of Veterans Affairs 2020). However, it may be that clinic-based interventions are needed to improve EBP reach beyond the levels achieved by training and education. In this project, the external facilitator worked with a team champion to influence the inner and, to a lesser extent, the outer (i.e., leadership support for the PTSD clinic's EBP mission) settings for EBP delivery. One could imagine that other interventions, such as those that focus on preparing team leaders to implement EBPs (Aarons et al. 2017), might also affect clinic-level adoption of EBPs for PTSD. It would be useful for future quality improvement or research efforts that build on this work to compare implementation strategies and evaluate their relative advantages, particularly with respect to contextual factors that vary across PTSD clinics or feasibility given limited resources.

Toolkit-guided external facilitation was developed with an eye toward broader dissemination (Brownson et al. 2013). Several factors contribute to the feasibility of VHA adopting this intervention to improve reach in other low-reach PTSD clinics. First, VHA has adopted external facilitation projects and continues to train facilitators for quality improvement (Ritchie et al. 2017). Therefore, facilitation is available and acceptable. Second, the external facilitator was a VHA staff member who integrated external facilitation into other responsibilities, demonstrating that, at least on a small scale, it may not be necessary to hire new staff to extend this work. Third, the data used to generate the project’s audit and feedback reports and implementation outcomes are available through administrative databases and do not require data collection in a research context. Fourth, the PERSIST toolkit was built on the VHA intranet and is thus accessible and adaptable. Fifth, this project was conducted in close collaboration with frontline clinicians and other stakeholders in PTSD care and EBP dissemination. Such partnership is integral to a learning healthcare system (Institute of Medicine 2013) and provides linkages with decision-makers and resources necessary for broader spread.

At the same time, factors could hinder the spread of this work beyond this quality improvement project. The external facilitator had a high level of expertise in facilitation, PTSD specialty care, and EBPs. With only one facilitator, we could not separate the intervention effect from the skill of this individual. Future work should determine the level of expertise needed or how to best scale-up this type of facilitation. We also note that not all low-reach PTSD clinics have a strong local champion with the ability and time to spearhead this type of work and some clinics may be constrained as to the types of changes they can make.

The weaknesses associated with the design of this project warrant attention. Feasibility considerations informed our decision to use a prospective, quasi-experimental design. However, because of the lack of random assignment, selection bias cannot be ruled out. It is likely that the intervention and control PTSD clinics differed in ways for which we were not able to adjust. In particular, the control clinics may not have had a staff member who would have been willing and able to serve as a local champion. We did not have interview or other data to aid in interpretation of the unexpected finding of a slight decrease in EBP reach in other mental health clinics at the control sites in adjusted analyses. Similarly, it would be informative to know whether the PTSD clinics in the control sites had made efforts to improve EBP reach on their own that were not successful in the absence of external facilitation. Another limitation concerns our focus on only one implementation outcome—EBP reach. EBP reach was prioritized as an indicator of EBP access that is salient to VHA operational partners, who track the number of patients with PTSD who receive at least one template-documented CPT or PE session through a national dashboard. We also had evidence-based information to inform an implementation intervention targeting this outcome (Sayer et al. 2017). More pre-implementation work would be needed to tailor the toolkit to implementation outcomes that may reflect the quality of EBP delivery after treatment initiation and to other clinical contexts, such as low-reach PTSD clinics without an internal champion for EBPs or clinics that treat patients with a range of psychiatric diagnoses. Indeed, the toolkit would have been different had we sought, for example, to improve EBP completion rates, which are known to be low (Kehle-Forbes et al. 2016; Maguen et al. 2019). In this project, the median number of EBP sessions in the intervention and control PTSD clinics, respectively, was 4.5 and 5 in the 6 months pre-intervention and 5 and 4 in the 6 months post-intervention. Thus, the intervention did not affect EBP dose. Last, we did not evaluate patient outcomes. In routine care, the level of improvement in patient outcomes seen in efficacy and effectiveness studies may be dampened (Chambers et al. 2013). Unfortunately, a hybrid design that evaluates both implementation and effectiveness (Curran et al. 2012) was not feasible in the context of this quality improvement project. Because VHA therapists do not regularly administer and document symptom measures over the course of psychotherapies other than EBPs, outcome data were not available for other therapy types. Thus, despite the evidence supporting EBPs for PTSD over other treatments (Department of Veterans Affairs/Department of Defense 2017; Hamblen et al. 2019; Lee et al. 2016), there remains some uncertainty as to whether improved EBP reach also resulted in the expected improved clinical outcomes.

Conclusions

Toolkit-guided external facilitation is a promising strategy for improving reach of EBPs for PTSD to patients seen in low-reach PTSD clinics. Findings can inform efforts to improve access to EBPs for PTSD in VHA. Research is warranted to determine whether the effectiveness, efficiency, and scalability of external facilitation are enhanced by use of a toolkit that bundles strategies of high-performing clinics and serves as a roadmap for facilitation activities.