INTRODUCTION

In the context of the translational research continuum (see Fig. 1), the difference between Explanatory Trials (also known as Efficacy Trials) and Pragmatic Trials (also known as Effectiveness Trials) is well understood.1 Explanatory Trials are designed to answer the question “Can this intervention work under ideal conditions?”2 In contrast, Pragmatic Trials are designed to answer the question “Does this intervention work under routine care conditions?”2,3 Acknowledging that most trials are neither pure Explanatory Trials nor pure Pragmatic Trials, the Pragmatic Explanatory Continuum Indicator Summary (PRECIS) tool provides guidelines for describing where on the explanatory-pragmatic continuum a trial falls.2,3 These guidelines are useful for scientific funding agencies to evaluate whether proposed trials address their research priorities, and for researchers and policy-makers to interpret published findings.

Figure 1

Translational research pipeline.

At the other end of the translational research continuum (see Fig. 1), the differences between Pragmatic Trials and Implementation Trials are also well understood. Implementation Trials test the success of implementation strategies4 designed to promote the use of evidence-based practices previously demonstrated to be effective in routine care. Implementation Trials5 are designed to answer the question “Does this implementation strategy successfully promote the use of this evidence-based practice in routine care?” The primary outcomes of Implementation Trials typically include the (1) proportion of providers who adopted the evidence-based practice, (2) degree to which providers delivered the evidence-based practice with high fidelity (i.e., as intended), and (3) proportion of eligible patients reached by the evidence-based practice.6 Thus, while Pragmatic Trials test the effectiveness of clinical interventions delivered in routine care, Implementation Trials test the success of implementation strategies designed to promote the use of evidence-based practices in routine care. Implementation Trials may be characterized along an explanatory-pragmatic continuum using the PRECIS-Provider Strategies tool.7,8

To speed up the process by which evidence-based practices are developed and adopted, Curran et al. encouraged researchers to consider using Hybrid Effectiveness-Implementation Trials, defined as a trial “that takes a dual focus a priori in assessing clinical effectiveness and implementation.”9 By conducting one Hybrid Trial instead of sequential Pragmatic and Implementation Trials, the research timeline can ideally be shortened (see Fig. 1). There are three basic types of Hybrid Trials (see Table 1). While Pragmatic Trials and Hybrid Type 1 Trials test the effectiveness of clinical interventions in routine care, Hybrid Type 3 Trials test the success of implementation strategies to promote evidence-based practice use in routine care, and Hybrid Type 2 Trials test both. Due to the close proximity of Pragmatic Trials and Hybrid Trials on the translational research continuum (see Fig. 1), there is less clarity about the similarities and differences between these trial types.

Table 1 Definitions of Trial Types

The purpose of this paper is to clarify the similarities and differences between Pragmatic Trials and Hybrid Trials for funders, researchers, and policy-makers. Acknowledging that most trials are neither pure Pragmatic Trials nor pure Hybrid Trials, blurred boundaries between trial types can hamper the evaluation of grant applications and the scientific interpretation of findings. To illustrate this, we highlight a recently published study self-labeled as a “pragmatic cluster randomized control trial” of integrating behavioral health into primary care.10,11 The comparators in this trial are described as co-location of mental health specialists in primary care versus co-location of mental health specialists in primary care plus an online educational curriculum for providers, an implementation workbook, remote quality improvement coaching services for internal facilitators, and an online learning community.10 The clinical intervention in both arms is the same (i.e., co-location), and the comparators are clearly implementation strategies (i.e., none vs. multifaceted support).4 However, the primary outcome is a measure of clinical effectiveness (change in patients’ health status) and the secondary outcome is a measure of implementation success (fidelity to the integrated care model).12 This pragmatically labeled trial, comparing two implementation strategies by examining patients’ health status, illustrates how easily the similarities between Pragmatic and Hybrid Trials can cause confusion, resulting in an incongruence between scientific aims and the chosen trial type.

SIMILARITIES BETWEEN PRAGMATIC AND HYBRID TRIALS

Differences and similarities between Pragmatic and Hybrid Trial types are presented in Table 2. All trial types tend to use methods considered to be pragmatic according to PRECIS, such as specifying broad inclusion criteria and minimal exclusion criteria, being conducted in routine care settings where treatment is delivered by routine care providers, and using intention-to-treat analyses to examine outcomes.2,3 The greatest similarities are usually between Pragmatic and Hybrid Type 2 Trials because they are both designed to measure the effectiveness of clinical interventions which previous research has shown to be effective, at least in some populations, settings, or delivery modalities. In contrast, Hybrid Type 1 Trials are designed primarily to establish the effectiveness of a clinical intervention in routine care, and are usually less pragmatic than Pragmatic Trials (see Fig. 1). Hybrid Type 3 Trials are primarily designed to compare implementation strategies rather than clinical interventions.

Table 2 Similarities and Differences Between Pragmatic Trials and Hybrid Effectiveness-Implementation Trials

DIFFERENCES BETWEEN PRAGMATIC AND HYBRID TRIALS

The differences between Pragmatic and Hybrid Trials have to do with the (1) primary outcomes, (2) specification of implementation strategies, (3) secondary aims, (4) attention to fidelity, (5) use of “artificial” versus “practical” implementation strategies, and (6) use of “evidence-based” versus “novel” implementation strategies.

Primary Outcomes

Pragmatic, Hybrid Type 1, and Hybrid Type 2 Trials compare the effectiveness of two or more clinical interventions, with one often being usual care. Clinical interventions are treatments (e.g., psychotherapy), treatment modalities (e.g., mHealth), or service models (e.g., patient-centered medical homes) designed to directly impact patient outcomes. Thus, for Pragmatic, Hybrid Type 1, and Hybrid Type 2 Trials, the primary/co-primary outcomes are usually specified as patient-level outcomes, such as treatment compliance, procedural complications, side effects, lab results, symptoms, functioning, or hospital readmission. Hybrid Type 2b and Hybrid Type 3 Trials compare two or more implementation strategies, with one often being usual implementation. Powell et al. provide a comprehensive list of implementation strategies,4 some of which are often bundled together. Implementation strategies promote the use of evidence-based practices4 and indirectly impact patient outcomes. We acknowledge that the distinction between clinical interventions and implementation strategies is sometimes blurred, especially with regard to interventions designed to promote patient engagement in treatment (e.g., telehealth trials). Because Hybrid Type 2b and Type 3 Trials test implementation strategies that indirectly impact patient outcomes, the primary/co-primary outcome is implementation success, reflected by such measures as provider adoption, provider fidelity, and patient reach.6 Most Hybrid Type 3 Trials specify patient-level outcomes as secondary outcomes.

Specification of Implementation Strategies

Historically, the implementation strategies of most Pragmatic and Hybrid Type 1 Trials are neither pre-specified (reported before publishing the results) nor post-specified (reported with the published results),13 and when they are, they are not usually called implementation strategies.14 Because Hybrid Type 2 and Hybrid Type 3 Trials evaluate the success of implementation strategies, often hypothesizing that one is superior to another, the implementation strategies are always pre-specified in grant applications, trial registries, and protocol papers. An important nuance is that many implementation strategies are tailored to specific sites based on a local needs assessment, and thus strategies can vary across sites during the same trial.15 Similarly, adaptive implementation strategies deploy more, or more intensive, implementation strategies when adoption, fidelity, and/or reach are poor at under-performing sites.16,17,18 Nevertheless, Hybrid Type 2 and Hybrid Type 3 Trials pre-specify the tailoring or adaptive nature of the implementation strategies.

Secondary Aims

Another difference between trial types is whether moderation or mediation analyses are specified as secondary aims. Moderation analyses test interaction effects, such as whether the impact of the clinical intervention depends on the characteristics of the patients, or whether the impact of the implementation strategy depends on the characteristics of providers or clinics. Pragmatic Trials typically conduct moderation analyses to examine treatment heterogeneity among patients.1 Moderation analyses of implementation outcomes are much less common in Hybrid Trials because of the challenges of achieving adequate statistical power. Very large Hybrid Trials conducted in multiple healthcare systems/clinics have the potential to examine whether contextual factors are effect modifiers for the implementation strategy.19 The Consolidated Framework for Implementation Research (CFIR) describes provider-level, organization-level, and environmental-level modifiers that may make an implementation strategy more or less successful.20 Mediation analyses determine how a clinical intervention improves patient outcomes or how an implementation strategy promotes the use of an evidence-based practice. Mediation analyses are not typically conducted in Pragmatic Trials because the mechanisms of action for the clinical intervention have usually already been identified in explanatory clinical trials. Hybrid Type 1 Trials, and sometimes Hybrid Type 2 Trials, examine whether the mechanisms of action identified in explanatory trials are still being targeted effectively when the clinical intervention is delivered in routine care. Implementation researchers should also conduct mediation analyses to determine whether implementation strategies are successfully targeting the hypothesized mechanism(s) of action.19 An exemplar in this regard is the implementation trial conducted by Williams et al., who randomized 475 mental health clinicians in 14 children’s mental health agencies to usual implementation or to a novel implementation strategy to improve organizational culture.21 Results demonstrated that the implementation strategy significantly and substantially increased the use of evidence-based practices and that, as hypothesized, improved organizational culture partially mediated the effect.

Attention to Fidelity

Adoption and reach are frequently specified as implementation outcomes in Hybrid Type 2 and Hybrid Type 3 Trials, but are rarely measured or reported in Pragmatic Trials. Therefore, the specification of adoption and reach outcomes is a good indicator that a trial is a Hybrid Type 2 or 3 Trial and not a Pragmatic Trial. In contrast, fidelity is often measured and reported in both Pragmatic and Hybrid Trials.22 Fidelity represents the degree to which the clinical intervention is delivered as intended.23 While adaptation (intentional fidelity-consistent changes to the adaptable periphery of the clinical intervention to improve fit, engagement, and effectiveness)24,25 is encouraged, fidelity-inconsistent deviations from the core intervention components are not.25 A fundamental difference between Pragmatic and Hybrid Trial types concerns the role of fidelity: (1) whether, how, and how much fidelity is intervened upon, (2) whether that is pre-specified in grant applications, trial registries, and protocol papers, and (3) whether fidelity is analyzed as an outcome. Because the purpose of Pragmatic Trials is to estimate the effectiveness of clinical interventions in routine care, fidelity is reported descriptively. Process evaluations are currently recommended for Pragmatic Trials evaluating complex interventions,26,27,28,29,30 to document how well the intervention was implemented in order to interpret the observed effectiveness of the clinical intervention.28,29 However, fidelity in Pragmatic Trials should not be intervened upon more than a healthcare system’s normal quality improvement activities would.31 In fact, the PRECIS tool rates how pragmatic a trial is based in part on how much fidelity is controlled.2 In contrast, Hybrid Type 2 and Hybrid Type 3 Trials test the effectiveness of implementation strategies designed to maximize fidelity. Consequently, Hybrid Type 2 and Hybrid Type 3 Trials often specify fidelity as the primary outcome.6,23

Artificial Versus Practical Implementation Strategies

In Pragmatic, Hybrid Type 2, and Hybrid Type 3 Trials, research teams typically rely on implementation strategies that are, or are expected to be, practical to use outside the context of research. In contrast, many implementation strategies cannot feasibly be replicated in routine care settings. Such implementation strategies would be characterized as “explanatory” by Domain #4 of the PRECIS-2-Provider Strategies,7 but will be referred to here as “artificial.” Examples of artificial implementation strategies include (1) increasing adoption by using research funds to pay for intervention delivery, (2) increasing reach by advertising in the community for trial participants, and (3) increasing fidelity by monitoring it frequently and re-training and/or removing clinicians with poor fidelity. Hybrid Type 1 Trials are more likely to use artificial implementation strategies while exploring promising practical implementation strategies. For example, in a Hybrid Type 1 Trial of tobacco treatment in oncology centers, Goshe et al. report requiring a week-long training for study counselors, followed by investigator reviews of counseling session recordings and weekly supervision meetings to review all active cases to optimize fidelity. After the trial is over, focus groups will assess more practical implementation strategies. Importantly, the degree of artificiality may vary depending on the resources available to a particular healthcare system, such that an implementation strategy may be feasible in a healthcare system with a strong quality improvement infrastructure, but not in one with inadequate infrastructure. Pragmatic Trials differ from Hybrid Type 1 Trials mostly because they rely on practical implementation strategies rather than artificial ones. This is why Hybrid Type 1 Trials, but not Pragmatic Trials, need to conduct exploratory research to identify practical implementation strategies. When the implementation strategies used in Pragmatic and Hybrid Type 1 Trials are not described, these two trial types can be indistinguishable.

Evidence-Based Versus Novel Implementation Strategies

Just like clinical interventions, an implementation strategy can fall along a continuum from evidence-based,32 to evidence-based in some contexts, to novel. Pragmatic Trials should use evidence-based implementation strategies or compare clinical interventions that do not face meaningful implementation barriers. For example, Flum et al. compared the clinical effectiveness of antibiotics to surgery for appendicitis, both of which had already been adopted into routine care.33 Many Hybrid Type 2b and Hybrid Type 3 Trials compare a novel implementation strategy to a commonly used implementation strategy known to be only marginally successful (e.g., train and hope34). For example, in a Hybrid Type 3 Trial, Cucciare et al. randomized rural psychotherapists to standard training or standard training plus computer support (a novel implementation strategy).35 Therapists randomized to training plus computer support were substantially more likely to follow the therapy protocol with fidelity (primary outcome), and their patients experienced statistically greater improvements in symptoms (secondary outcome). These results suggest that the common practice of training psychotherapists in an evidence-based practice and hoping they deliver it per protocol should be replaced with one that provides ongoing fidelity support. Hybrid Type 2b and Hybrid Type 3 Trials can also compare a novel low-intensity implementation strategy to a more resource-intensive implementation strategy that is known to be effective. For example, Kolko et al. describe a Hybrid Type 3 Trial comparing three variants of practice facilitation, an implementation strategy shown to be successful at promoting the uptake of complex clinical interventions.36,37 The three practice facilitation variants are (1) targeting both front-line providers and leadership (evidence-based), (2) targeting front-line providers only (novel), and (3) targeting leadership only (novel). Results will determine whether the less evidence-based and less resource-intensive practice facilitation strategies targeting just front-line providers or just leadership are as successful as the more evidence-based and more resource-intensive strategy.

IDENTIFYING TRIAL TYPES

Acknowledging that trial types fall along a continuum, the differences between trial types can be demarcated according to the following dimensions: (1) primary outcome, (2) attention to fidelity, (3) artificial versus practical implementation strategies, and (4) evidence-based versus novel implementation strategies. Pragmatic and Hybrid Type 1 Trials specify measures of clinical effectiveness as the primary outcome while Hybrid Type 2 and Hybrid Type 3 Trials specify measures of implementation success as primary or co-primary outcomes. Pragmatic Trials often report fidelity descriptively whereas Hybrid Type 2 and Hybrid Type 3 Trials often specify fidelity as a primary or co-primary outcome. Hybrid Type 1 Trials tend to use artificial implementation strategies, whereas Pragmatic, Hybrid Type 2, and Hybrid Type 3 Trials use practical implementation strategies. Pragmatic Trials differ from Hybrid Type 2 and Hybrid Type 3 Trials in that the implementation strategies should be evidence-based rather than novel. Figure 2 provides a simple decision tree to help identify these various trial types, including suboptimal trial types such as the one described in the introduction that specified a measure of clinical effectiveness as the primary outcome, but only examined one clinical intervention.

Figure 2

Trial type decision tree. 1An artificial implementation strategy is one that is not feasibly replicated in routine care settings. 2An evidence-based implementation strategy is one that has been proven successful in the same or similar context (e.g., clinical intervention, target population, healthcare setting). 3Suboptimal indicates an incongruence between scientific aims and trial characteristics.
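For readers who find pseudocode helpful, the four dimensions above can be expressed as a small classification function. This is a deliberately simplified sketch, not a reproduction of the actual branching in Figure 2; the function name, argument names, and the “Suboptimal” fallback are illustrative assumptions.

```python
def classify_trial(clinical_primary, implementation_primary,
                   strategies_artificial, strategies_evidence_based):
    """Illustrative simplification of the trial-type dimensions:
    primary outcome(s), artificial vs. practical strategies, and
    evidence-based vs. novel strategies. Real trials fall along a
    continuum rather than into discrete categories."""
    if clinical_primary and implementation_primary:
        return "Hybrid Type 2"      # co-primary effectiveness + implementation outcomes
    if implementation_primary:
        return "Hybrid Type 3"      # implementation success is the primary outcome
    if clinical_primary:
        if strategies_artificial:
            return "Hybrid Type 1"  # effectiveness primary; artificial strategies
        if strategies_evidence_based:
            return "Pragmatic"      # effectiveness primary; practical, evidence-based strategies
        return "Suboptimal"         # e.g., novel strategies but only a clinical outcome
    return "Suboptimal"             # no primary outcome congruent with the aims
```

For instance, a trial with co-primary clinical and implementation outcomes would be classified as Hybrid Type 2, whereas a trial with only a clinical primary outcome supported by artificial strategies would be classified as Hybrid Type 1.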

FIDELITY DITCHES AND GUARDRAILS IN PRAGMATIC TRIALS AND HYBRID TYPE 2 TRIALS

Because patient health status is specified as the primary/co-primary outcome in both Pragmatic and Hybrid Type 2 Trials, it is critical that fidelity to the evidence-based clinical intervention(s) be sufficiently high to produce pre-post clinical improvement among patients on average.14 Therefore, within the context of a process evaluation, fidelity to the core functions28 of the clinical intervention should be monitored during both of these trial types, using practical methods38 (i.e., replicable outside the context of research) if possible. A conceptual challenge to such process evaluations is whether the evaluators should take a passive role (i.e., a summative evaluation in which results are reported at the end of the trial) or an active role (i.e., a formative evaluation in which ongoing feedback is provided during the trial to facilitate the identification and correction of implementation problems).27 It is generally recommended that pragmatic trialists not conduct course corrections to improve fidelity because doing so compromises external validity.27 However, while fidelity should not be controlled artificially in Pragmatic or Hybrid Type 2 Trials, it is uninformative and unethical to compare evidence-based clinical interventions that are delivered with such low fidelity that patients are not experiencing within-group pre-post clinical improvement.31 Therefore, whenever fidelity is so low that patients are not benefiting clinically (i.e., the ditch), we recommend that the trial be “rescued,” if possible, by increasing the intensity of pre-specified practical evidence-based implementation strategies and/or adding post hoc implementation strategies (i.e., the guardrails). For example, in a Hybrid Type 2 Trial, Hartzler et al. pre-specified a “fidelity drift alarm” that triggered an additional pre-specified practical implementation strategy (technical assistance to the therapist) to support the rollout of an evidence-based psychotherapy.39 For Hybrid Type 2 Trials, investigators must weigh whether making post hoc modifications to the implementation strategies (or using adaptive implementation strategies) sacrifices the co-primary aim of comparing pre-specified implementation strategies in order to rescue the co-primary aim of comparing clinical interventions. Note that if artificial implementation strategies must be used to maintain adequate fidelity, the Pragmatic Trial or Hybrid Type 2 Trial becomes a Hybrid Type 1 Trial by default.
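The guardrail logic described above can be sketched as a simple monitoring check. The 0.8 threshold, function name, and return values are hypothetical illustrations of a pre-specified trigger, not the actual procedure used by Hartzler et al.

```python
import statistics

def fidelity_drift_alarm(fidelity_scores, threshold=0.8):
    """Hypothetical sketch of a pre-specified 'fidelity drift alarm':
    if mean observed fidelity at a site falls below a pre-registered
    threshold (the ditch), trigger the pre-specified rescue
    implementation strategy (the guardrail), e.g., technical
    assistance to the therapist."""
    mean_fidelity = statistics.mean(fidelity_scores)
    if mean_fidelity < threshold:
        return "trigger rescue strategy"  # intensify or add implementation support
    return "continue monitoring"          # fidelity adequate; no course correction
```

The key design point is that both the threshold and the rescue strategy are specified before the trial begins, so that any course correction is itself pre-registered rather than ad hoc.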

Importantly, it may not always be obvious when fidelity is below the threshold needed to produce clinical improvement. Ideally, data from explanatory trials or Hybrid Type 1 Trials could be used to examine the correlation between fidelity and clinical outcomes to determine the thresholds. In the absence of such data, Data Safety Monitoring Boards should monitor clinical outcomes (masked for Pragmatic and Hybrid Type 2 Trials) and alert investigators when patients are not improving. Likewise, for Hybrid Type 2 Trials that specify fidelity as a co-primary outcome (thus requiring masking), fidelity may need to be monitored by a Data Safety Monitoring Board.

RECOMMENDATIONS

Given the subtle, but important, similarities and differences between Pragmatic and Hybrid Trials, there is the potential for investigators to mislabel their trial type or mistakenly use the wrong trial type to answer their research question. The recommendations depicted in Table 3 should help investigators choose, label, and operationalize the most appropriate trial type to answer their research question. These recommendations complement the reporting guidelines for clinical effectiveness trials (TIDieR) and implementation trials (StaRI).40,41

Table 3 Design Recommendations for Pragmatic and Hybrid Effectiveness-Implementation Trials

CONCLUSION

Pragmatic trial methodologies and implementation science evolved from different disciplines.42 Pragmatism is focused on increasing the external validity of research findings. Hybridism is focused on speeding up the research process by making trials less sequential in nature. Yet Pragmatic and Hybrid Trials share many design features, so much so that they are easily conflated. However, there are key differences between the trial types, and they answer very different research questions. Because Hybrid Type 1 Trials use artificial implementation strategies, which compromise external validity, they determine whether a clinical intervention can be effective in routine care. Because Pragmatic and Hybrid Type 2 Trials use practical implementation strategies, which optimize external validity, they determine whether a clinical intervention is effective when delivered in routine care. However, Pragmatic Trials differ from Hybrid Type 2 and Type 3 Trials because their implementation strategies should be evidence-based for the clinical intervention, targeted patient population, and setting. In contrast, because Hybrid Type 2 and Type 3 Trials are designed to determine whether an implementation strategy is successful, the implementation strategies themselves typically do not have an evidence base associated with their use for the clinical intervention, target population, and/or setting, or are completely novel. While fully acknowledging that most trials will be neither pure Pragmatic Trials nor pure Hybrid Trials, we suggest clearly describing (1) whether the primary outcomes are clinical effectiveness and/or implementation success, (2) the degree to which fidelity (and other implementation outcomes) will be controlled and how, and (3) the degree to which the implementation strategies are artificial/practical and evidence-based/novel. To ensure a trial is informative and ethical, we also suggest, when feasible, pre-specifying fidelity thresholds in Pragmatic and Hybrid Type 2 Trials that trigger the intensification or addition of implementation strategies to ensure patients are, on average, experiencing pre-post clinical improvement. While the terminology and examples used here focus on the implementation of clinical interventions, many of the concepts and recommendations may apply to the implementation of other evidence-based practices, such as educational innovations.