Abstract
Pragmatism in clinical trials is focused on increasing the generalizability of research findings for routine clinical care settings. Hybridism in clinical trials (i.e., assessing both clinical effectiveness and implementation success) is focused on speeding up the process by which evidence-based practices are developed and adopted into routine clinical care. Even though pragmatic trial methodologies and implementation science evolved from very different disciplines, Pragmatic Trials and Hybrid Effectiveness-Implementation Trials share many similar design features. In fact, these types of trials can easily be conflated, creating the potential for investigators to mislabel their trial type or mistakenly use the wrong trial type to answer their research question. Blurred boundaries between trial types can hamper the evaluation of grant applications, the scientific interpretation of findings, and policy-making. Although most trials are neither purely Pragmatic Trials nor purely Hybrid Effectiveness-Implementation Trials, there are key differences between these trial types, and they answer very different research questions. The purpose of this paper is to clarify the similarities and differences of these trial types for funders, researchers, and policy-makers. In addition, recommendations are offered to help investigators choose, label, and operationalize the most appropriate trial type to answer their research question. These recommendations complement existing reporting guidelines for clinical effectiveness trials (TIDieR) and implementation trials (StaRI).
INTRODUCTION
In the context of the translational research continuum (see Fig. 1), the difference between Explanatory Trials (also known as Efficacy Trials) and Pragmatic Trials (also known as Effectiveness Trials) is well understood.1 Explanatory Trials are designed to answer the question “Can this intervention work under ideal conditions?”2 In contrast, Pragmatic Trials are designed to answer the question “Does this intervention work under routine care conditions?”2,3 Because most trials are neither pure Explanatory Trials nor pure Pragmatic Trials, the Pragmatic Explanatory Continuum Indicator Summary (PRECIS) tool provides guidelines for describing where on the explanatory-pragmatic continuum a trial falls.2,3 These guidelines are useful for scientific funding agencies to evaluate whether proposed trials address their research priorities, and for researchers and policy-makers to interpret published findings.
On the other end of the translational research continuum (see Fig. 1), the differences between Pragmatic Trials and Implementation Trials are also well understood. Implementation trials test the success of implementation strategies4 designed to promote the use of evidence-based practices previously demonstrated to be effective in routine care. Implementation trials5 are designed to answer the question “Does this implementation strategy successfully promote the use of this evidence-based practice in routine care?” The primary outcomes of implementation trials typically include the (1) proportion of providers who adopted the evidence-based practice, (2) degree to which providers delivered the evidence-based practice with high fidelity (i.e., as intended), and (3) proportion of eligible patients reached by the evidence-based practice.6 Thus, while Pragmatic Trials test the effectiveness of clinical interventions delivered in routine care, implementation trials test the success of implementation strategies designed to promote the use of evidence-based practices in routine care. Implementation Trials may be characterized along an explanatory-pragmatic continuum using the PRECIS-Provider Strategies tool.7,8
To speed up the process by which evidence-based practices are developed and adopted, Curran et al. encouraged researchers to consider using Hybrid Effectiveness-Implementation Trials, defined as a trial “that takes a dual focus a priori in assessing clinical effectiveness and implementation.”9 By conducting one Hybrid Trial instead of sequential Pragmatic and Implementation Trials, the research timeline can ideally be shortened (see Fig. 1). There are three basic types of Hybrid Trials (see Table 1). While Pragmatic Trials and Hybrid Type 1 Trials test the effectiveness of clinical interventions in routine care, Hybrid Type 3 Trials test the success of implementation strategies to promote evidence-based practice use in routine care, and Hybrid Type 2 Trials test both. Due to the close proximity of Pragmatic Trials and Hybrid Trials on the translational research continuum (see Fig. 1), there is less clarity about the similarities and differences between these trial types.
The purpose of this paper is to clarify the similarities and differences between Pragmatic Trials and Hybrid Trials for funders, researchers, and policy-makers. Although most trials are neither purely Pragmatic Trials nor purely Hybrid Trials, blurred boundaries between trial types can hamper the evaluation of grant applications and the scientific interpretation of findings. To illustrate this, we highlight a recently published study self-labeled as a “pragmatic cluster randomized control trial” of integrating behavioral health into primary care.10,11 The comparators in this trial are described as co-location of mental health specialists in primary care versus co-location of mental health specialists in primary care plus an online educational curriculum for providers, an implementation workbook, remote quality improvement coaching services for internal facilitators, and an online learning community.10 The clinical intervention in both arms is the same (i.e., co-location), and the comparators are clearly implementation strategies (e.g., none vs. multifaceted support).4 However, the primary outcome is a measure of clinical effectiveness (change in patients’ health status) and the secondary outcome is a measure of implementation success (fidelity to the integrated care model).12 This pragmatically labeled trial, which compares two implementation strategies by examining patients’ health status, illustrates how easily the similarities between Pragmatic and Hybrid Trials can cause confusion, resulting in an incongruence between scientific aims and the chosen trial type.
SIMILARITIES BETWEEN PRAGMATIC AND HYBRID TRIALS
Differences and similarities between Pragmatic and Hybrid Trial types are presented in Table 2. All trial types tend to use methods considered pragmatic according to PRECIS, such as specifying broad inclusion criteria and minimal exclusion criteria, being conducted in routine care settings where treatment is delivered by routine care providers, and using intention-to-treat analyses to examine outcomes.2,3 The greatest similarities are usually between Pragmatic and Hybrid Type 2 Trials because both are designed to measure the effectiveness of clinical interventions that previous research has shown to be effective, at least in some populations, settings, or delivery modalities. In contrast, Hybrid Type 1 Trials are designed primarily to establish the effectiveness of a clinical intervention in routine care, and are usually less pragmatic than Pragmatic Trials (see Fig. 1). Hybrid Type 3 Trials are primarily designed to compare implementation strategies rather than clinical interventions.
DIFFERENCES BETWEEN PRAGMATIC AND HYBRID TRIALS
The differences between Pragmatic and Hybrid Trials have to do with the (1) primary outcomes, (2) specification of implementation strategies, (3) secondary aims, (4) attention to fidelity, (5) use of “artificial” versus “practical” implementation strategies, and (6) use of “evidence-based” versus “novel” implementation strategies.
Primary Outcomes
Pragmatic, Hybrid Type 1, and Hybrid Type 2 Trials compare the effectiveness of two or more clinical interventions, with one often being usual care. Clinical interventions are treatments (e.g., psychotherapy), treatment modalities (e.g., mHealth), or service models (e.g., patient-centered medical homes) designed to directly impact patient outcomes. Thus, for Pragmatic, Hybrid Type 1, and Hybrid Type 2 Trials, the primary/co-primary outcomes are usually specified as patient-level outcomes, such as treatment compliance, procedural complications, side effects, lab results, symptoms, functioning, or hospital readmission. Hybrid Type 2b and Hybrid Type 3 Trials compare two or more implementation strategies, with one often being usual implementation. Powell et al. provide a comprehensive list of implementation strategies,4 some of which are often bundled together. Implementation strategies promote the use of evidence-based practices4 and indirectly impact patient outcomes. We acknowledge that the distinction between clinical interventions and implementation strategies is sometimes blurred, especially with regard to interventions designed to promote patient engagement in treatment (e.g., telehealth trials). Because Hybrid Type 2b and Type 3 Trials test implementation strategies that indirectly impact patient outcomes, the primary/co-primary outcome is implementation success, reflected by measures such as provider adoption, provider fidelity, and patient reach.6 Most Hybrid Type 3 Trials specify patient-level outcomes as secondary outcomes.
Specification of Implementation Strategies
Historically, the implementation strategies of most Pragmatic and Hybrid Type 1 Trials have been neither pre-specified (reported before publishing the results) nor post-specified (reported with the published results),13 and when they are reported, they are not usually called implementation strategies.14 Because Hybrid Type 2 and Hybrid Type 3 Trials evaluate the success of implementation strategies, often hypothesizing that one is superior to another, the implementation strategies are always pre-specified in grant applications, trial registries, and protocol papers. An important nuance is that many implementation strategies are tailored to specific sites based on a local needs assessment, and thus strategies can vary across sites during the same trial.15 Similarly, adaptive implementation strategies can be used, in which additional, or more intensive, implementation strategies are deployed when adoption, fidelity, and/or reach are poor at under-performing sites.16,17,18 Nevertheless, Hybrid Type 2 and Hybrid Type 3 Trials pre-specify the tailored or adaptive nature of the implementation strategies.
Secondary Aims
Another difference between trial types is whether moderation or mediation analyses are specified as secondary aims. Moderation analyses test interaction effects such as whether the impact of the clinical intervention depends on the characteristics of the patients or whether the impact of the implementation strategy depends on the characteristics of providers or clinics. Pragmatic Trials typically conduct moderation analyses to examine treatment heterogeneity among patients.1 Moderation analyses of implementation outcomes are much less common in Hybrid Trials because of the challenges to achieving adequate statistical power. Very large Hybrid Trials conducted in multiple healthcare systems/clinics have the potential to examine whether contextual factors are effect modifiers for the implementation strategy.19 The Consolidated Framework for Implementation Research (CFIR) describes provider-level, organization-level, and environmental-level modifiers that may make an implementation strategy more or less successful.20 Mediation analyses determine how a clinical intervention is improving patient outcomes or how an implementation strategy is promoting the use of an evidence-based practice. Mediation analyses are not typically conducted in Pragmatic Trials, because the mechanisms of action for the clinical intervention have usually already been identified in explanatory clinical trials. Hybrid Type 1 Trials and sometimes Hybrid Type 2 Trials examine whether the mechanisms of action for the clinical intervention identified in explanatory trials are still being targeted effectively when delivered in routine care. Implementation researchers should also conduct mediation analyses to determine whether implementation strategies are successfully targeting the hypothesized mechanism(s) of action.19 An exemplar in this regard is the implementation trial conducted by Williams et al. 
that randomized 475 mental health clinicians in 14 children’s mental health agencies to usual implementation or to a novel implementation strategy to improve organizational culture.21 Results demonstrated that the implementation strategy significantly and substantially increased the use of evidence-based practices, and that, as hypothesized, improved organizational culture partially mediated the effect.
Attention to Fidelity
Adoption and reach are frequently specified as implementation outcomes in Hybrid Type 2 and Hybrid Type 3 Trials, but are rarely measured or reported in Pragmatic Trials. Therefore, the specification of adoption and reach outcomes is a good indicator that the trial is a Hybrid Type 2 or 3 Trial and not a Pragmatic Trial. In contrast, fidelity is often measured and reported in both Pragmatic and Hybrid Trials.22 Fidelity represents the degree to which the clinical intervention is delivered as intended.23 While adaptation (intentional fidelity-consistent changes to the adaptable periphery of the clinical intervention to improve fit, engagement, and effectiveness)24,25 is encouraged, fidelity-inconsistent deviations from the core intervention components are not.25 A fundamental difference between Pragmatic and Hybrid Trial types concerns the role of fidelity: (1) whether, how, and how much fidelity is intervened upon, (2) whether that is pre-specified in grant applications, trial registries, and protocol papers, and (3) whether fidelity is analyzed as an outcome. Because the purpose of Pragmatic Trials is to estimate the effectiveness of clinical interventions in routine care, fidelity is reported descriptively. Process evaluations are currently recommended for Pragmatic Trials evaluating complex interventions,26,27,28,29,30 to document how well the intervention was implemented in order to interpret the observed effectiveness of the clinical intervention.28,29 However, fidelity in Pragmatic Trials should not be intervened upon more than a healthcare system’s normal quality improvement activities would.31 In fact, the PRECIS tool rates how pragmatic a trial is based on how much fidelity is controlled.2 In contrast, Hybrid Type 2 and Hybrid Type 3 Trials test the effectiveness of implementation strategies designed to maximize fidelity. Consequently, Hybrid Type 2 and Hybrid Type 3 Trials often specify fidelity as the primary outcome.6,23
Artificial Versus Practical Implementation Strategies
In Pragmatic, Hybrid Type 2, and Hybrid Type 3 Trials, research teams typically rely on implementation strategies that are, or are expected to be, practical to use outside the context of research. In contrast, many implementation strategies cannot feasibly be replicated in routine care settings. Such implementation strategies would be characterized as “explanatory” by Domain #4 of the PRECIS-2-Provider Strategies,7 but will be referred to here as “artificial.” Examples of artificial implementation strategies include (1) increasing adoption by using research funds to pay for intervention delivery, (2) increasing reach by advertising in the community for trial participants, and (3) increasing fidelity by monitoring fidelity frequently and re-training and/or removing clinicians with poor fidelity. Hybrid Type 1 Trials are more likely to use artificial implementation strategies while exploring promising practical implementation strategies. For example, in a Hybrid Type 1 Trial of tobacco treatment in oncology centers, Goshe et al. report requiring a week-long training for study counselors, followed by investigator reviews of counseling session recordings and weekly supervision meetings to review all active cases to optimize fidelity. After the trial is over, focus groups will assess more practical implementation strategies. Importantly, the degree of artificiality may vary depending on the resources available to a particular healthcare system, such that an implementation strategy may be feasible in a healthcare system with a strong quality improvement infrastructure, but not in one with inadequate infrastructure. Pragmatic Trials differ from Hybrid Type 1 Trials mainly because they rely on practical rather than artificial implementation strategies. This is why Hybrid Type 1 Trials, but not Pragmatic Trials, need to conduct exploratory research to identify practical implementation strategies.
When the implementation strategies used in Pragmatic and Hybrid Type 1 Trials are not described, these two trial types can be indistinguishable.
Evidence-Based Versus Novel Implementation Strategies
Just like clinical interventions, an implementation strategy can fall along a continuum from evidence-based,32 to evidence-based in some contexts, to novel. Pragmatic Trials should use evidence-based implementation strategies or compare clinical interventions that do not face meaningful implementation barriers. For example, Flum et al. compared the clinical effectiveness of antibiotics to surgery for appendicitis, both of which had already been adopted into routine care.33 Many Hybrid Type 2b and Hybrid Type 3 Trials compare a novel implementation strategy to a commonly used implementation strategy known to be marginally successful (e.g., train and hope34). For example, in a Hybrid Type 3 Trial, Cucciare et al. randomized rural psychotherapists to standard training or standard training plus computer support (a novel implementation strategy).35 Therapists randomized to training plus computer support were substantially more likely to follow the therapy protocol with fidelity (primary outcome) and their patients experienced statistically greater improvements in symptoms (secondary outcome). These results suggest that the common practice of training psychotherapists in an evidence-based practice and hoping they deliver it per protocol should be replaced with one that provides ongoing fidelity support. Hybrid Type 2b and Hybrid Type 3 Trials can also compare a novel low-intensity implementation strategy to a more resource-intensive implementation strategy that is known to be effective. For example, Kolko et al. describe a Hybrid Type 3 Trial comparing three variants of practice facilitation, an implementation strategy shown to be successful at promoting the uptake of complex clinical interventions.36,37 The three practice facilitation variants are (1) targeting both front-line providers and leadership (evidence-based), (2) targeting front-line providers only (novel), and (3) targeting leadership only (novel).
Results will determine whether the less evidence-based and less resource-intensive practice facilitation strategies targeting just front-line providers or just leadership are as successful as the more evidence-based and more resource-intensive strategy.
IDENTIFYING TRIAL TYPES
Acknowledging that trial types fall along a continuum, the differences between trial types can be demarcated according to the following dimensions: (1) primary outcome, (2) attention to fidelity, (3) artificial versus practical implementation strategies, and (4) evidence-based versus novel implementation strategies. Pragmatic and Hybrid Type 1 Trials specify measures of clinical effectiveness as the primary outcome while Hybrid Type 2 and Hybrid Type 3 Trials specify measures of implementation success as primary or co-primary outcomes. Pragmatic Trials often report fidelity descriptively whereas Hybrid Type 2 and Hybrid Type 3 Trials often specify fidelity as a primary or co-primary outcome. Hybrid Type 1 Trials tend to use artificial implementation strategies, whereas Pragmatic, Hybrid Type 2, and Hybrid Type 3 Trials use practical implementation strategies. Pragmatic Trials differ from Hybrid Type 2 and Hybrid Type 3 Trials in that the implementation strategies should be evidence-based rather than novel. Figure 2 provides a simple decision tree to help identify these various trial types, including suboptimal trial types such as the one described in the introduction that specified a measure of clinical effectiveness as the primary outcome, but only examined one clinical intervention.
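As a rough illustration only, the demarcation logic described above can be sketched as a simple classifier. This sketch is our own simplification of the dimensions discussed in this section, not a reproduction of the decision tree in Figure 2, and it collapses the evidence-based-versus-novel dimension into the practical-versus-artificial check.

```python
def classify_trial(primary_outcomes, uses_artificial_strategies):
    """Return a rough trial-type label (illustrative only).

    primary_outcomes: set containing "clinical" and/or "implementation",
        the pre-specified primary/co-primary outcomes.
    uses_artificial_strategies: True if the implementation strategies
        could not feasibly be replicated outside a research context.
    """
    # Co-primary clinical and implementation outcomes mark a Hybrid Type 2 Trial.
    if primary_outcomes == {"clinical", "implementation"}:
        return "Hybrid Type 2"
    # Implementation success alone as the primary outcome marks a Hybrid Type 3 Trial.
    if primary_outcomes == {"implementation"}:
        return "Hybrid Type 3"
    # With a clinical primary outcome, artificial strategies indicate Hybrid Type 1;
    # practical (and, ideally, evidence-based) strategies indicate a Pragmatic Trial.
    if primary_outcomes == {"clinical"}:
        return "Hybrid Type 1" if uses_artificial_strategies else "Pragmatic"
    return "Unclassified"
```

For example, `classify_trial({"clinical"}, True)` yields "Hybrid Type 1", mirroring the point above that artificial implementation strategies move a clinically focused trial away from the pragmatic end of the continuum.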
FIDELITY DITCHES AND GUARDRAILS IN PRAGMATIC TRIALS AND HYBRID TYPE 2 TRIALS
Because patient health status is specified as the primary/co-primary outcome in both Pragmatic and Hybrid Type 2 Trials, it is critical that fidelity to the evidence-based clinical intervention(s) be sufficiently high to produce pre-post clinical improvement among patients on average.14 Therefore, within the context of a process evaluation, fidelity to the core functions28 of the clinical intervention should be monitored during both of these trial types, using practical methods38 (i.e., replicable outside the context of research) if possible. A conceptual challenge to such process evaluations is whether the evaluators should take a passive role (i.e., a summative evaluation in which results are reported at the end of the trial) or an active role (i.e., a formative evaluation in which ongoing feedback is provided during the trial to facilitate the identification and correction of implementation problems).27 It is generally recommended that pragmatic trialists not conduct course corrections to improve fidelity because doing so compromises external validity.27 However, while fidelity should not be controlled artificially in Pragmatic or Hybrid Type 2 Trials, it is uninformative and unethical to compare evidence-based clinical interventions that are delivered with such low fidelity that patients are not experiencing within-group pre-post clinical improvement.31 Therefore, whenever fidelity is so low that patients are not benefiting clinically (i.e., the ditch), we recommend that the trial be “rescued,” if possible, by increasing the intensity of pre-specified practical evidence-based implementation strategies and/or adding post hoc implementation strategies (i.e., the guardrails). For example, in a Hybrid Type 2 Trial, Hartzler et al. 
pre-specified a “fidelity drift alarm” that triggered an additional pre-specified practical implementation strategy (technical assistance to the therapist) to support the rollout of an evidence-based psychotherapy.39 For Hybrid Type 2 Trials, investigators must weigh the disadvantages of making post hoc modifications to the implementation strategies (or using adaptive implementation strategies), which may sacrifice the co-primary aim of comparing pre-specified implementation strategies in order to rescue the co-primary aim of comparing clinical interventions. Note that if artificial implementation strategies must be used to maintain adequate fidelity, the Pragmatic Trial or Hybrid Type 2 Trial becomes a Hybrid Type 1 Trial by default.
Importantly, it may not always be obvious when fidelity is below the threshold needed to produce clinical improvement. Ideally, data from explanatory trials or Hybrid Type 1 Trials could be used to examine the correlation between fidelity and clinical outcomes to determine the thresholds. In the absence of such data, Data Safety Monitoring Boards should monitor clinical outcomes (masked for Pragmatic and Hybrid Type 2 Trials) and alert investigators when patients are not improving. Likewise, for Hybrid Type 2 Trials that specify fidelity as a co-primary outcome (thus requiring masking), fidelity may need to be monitored by a Data Safety Monitoring Board.
RECOMMENDATIONS
Given the subtle, but important, similarities and differences between Pragmatic and Hybrid Trials, there is the potential for investigators to mislabel their trial type or mistakenly use the wrong trial type to answer their research question. The recommendations depicted in Table 3 should help investigators choose, label, and operationalize the most appropriate trial type to answer their research question. These recommendations complement the reporting guidelines for clinical effectiveness trials (TIDieR) and implementation trials (StaRI).40,41
CONCLUSION
Pragmatic trial methodologies and implementation science evolved from different disciplines.42 Pragmatism is focused on increasing the external validity of research findings. Hybridism is focused on speeding up the research process by making trials less sequential in nature. Yet, Pragmatic and Hybrid Trials share many similar design features, so much so that they are easily conflated. However, there are key differences between the trial types, and they answer very different research questions. Because Hybrid Type 1 Trials use artificial implementation strategies, which compromise external validity, they determine whether a clinical intervention can be effective in routine care. Because Pragmatic and Hybrid Type 2 Trials use practical implementation strategies, which optimize external validity, they determine whether a clinical intervention is effective when delivered in routine care. However, Pragmatic Trials differ from Hybrid Type 2 and Type 3 Trials because the implementation strategies should be evidence-based for the clinical intervention, targeted patient population, and setting. In contrast, because Hybrid Type 2 and Type 3 Trials are designed to determine whether an implementation strategy is successful, the implementation strategies themselves typically do not have an evidence base associated with their use for the clinical intervention, target population, and/or setting, or are completely novel. While fully acknowledging that most trials will be neither pure Pragmatic Trials nor pure Hybrid Trials, we suggest clearly describing (1) whether the primary outcomes are clinical effectiveness and/or implementation success, (2) the degree to which fidelity (and other implementation outcomes) will be controlled and how, and (3) the degree to which the implementation strategies are artificial versus practical and evidence-based versus novel.
To ensure a trial is informative and ethical, we also suggest that, when feasible, Pragmatic and Hybrid Type 2 Trials pre-specify fidelity thresholds that trigger the intensification or addition of implementation strategies to ensure patients are, on average, experiencing pre-post clinical improvement. While the terminology and examples used here are focused on the implementation of clinical interventions, many of the concepts and recommendations may apply to the implementation of other evidence-based practices such as educational innovations.
Data Availability:
There are no data associated with this manuscript.
References
March J, Kraemer HC, Trivedi M, et al. What have we learned about trial design from NIMH-funded pragmatic trials? Neuropsychopharmacology. 2010;35(13):2491-2501.
Thorpe KE, Zwarenstein M, Oxman AD, et al. A pragmatic-explanatory continuum indicator summary (PRECIS): a tool to help trial designers. J Clin Epidemiol. 2009;62(5):464-475.
Loudon K, Treweek S, Sullivan F, Donnan P, Thorpe KE, Zwarenstein M. The PRECIS-2 tool: designing trials that are fit for purpose. BMJ (Clinical Research Ed). 2015;350:h2147.
Powell BJ, Waltz TJ, Chinman MJ, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10:21.
Bauer MS, Damschroder L, Hagedorn H, Smith J, Kilbourne AM. An introduction to implementation science for the non-specialist. BMC Psychol. 2015;3(1):32.
Glasgow RE, McKay HG, Piette JD, Reynolds KD. The RE-AIM framework for evaluating interventions: what can it tell us about approaches to chronic illness management? Patient Educ Couns. 2001;44(2):119-127.
Norton WE, Loudon K, Chambers DA, Zwarenstein M. Designing provider-focused implementation trials with purpose and intent: introducing the PRECIS-2-PS tool. Implement Sci. 2021;16(1):7.
Zatzick D, Palinkas L, Chambers DA, et al. Integrating pragmatic and implementation science randomized clinical trial approaches: a PRagmatic Explanatory Continuum Indicator Summary-2 (PRECIS-2) analysis. Trials. 2023;24(1):288.
Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012;50(3):217-226.
Crocker AM, Kessler R, van Eeghen C, et al. Integrating Behavioral Health and Primary Care (IBH-PC) to improve patient-centered outcomes in adults with multiple chronic medical and behavioral health conditions: study protocol for a pragmatic cluster-randomized control trial. Trials. 2021;22(1):200.
Littenberg B, Clifton J, Crocker AM, et al. A cluster randomized trial of primary care practice redesign to integrate behavioral health for those who need it most: Patients with multiple chronic conditions. Ann Fam Med. 2023;21(6):483-495.
Kessler RS, Auxier A, Hitt JR, et al. Development and validation of a measure of primary care behavioral health integration. Fam Syst Health. 2016;34(4):342-356.
Dal-Ré R, Janiaud P, Ioannidis JPA. Real-world evidence: how pragmatic are randomized controlled trials labeled as pragmatic? BMC Med. 2018;16(1):49.
Landes SJ, McBain SA, Curran GM. An introduction to effectiveness-implementation hybrid designs. Psychiatry Res. 2019;280:112513.
Powell BJ, Beidas RS, Lewis CC, et al. Methods to improve the selection and tailoring of implementation strategies. J Behav Health Serv Res. 2017;44(2):177-194.
Swindle T, Rutledge JM, Selig JP, et al. Obesity prevention practices in early care and education settings: an adaptive implementation trial. Implement Sci. 2022;17(1):25.
Fortney JC, Rajan S, Reisinger HS, et al. Deploying a telemedicine collaborative care intervention for posttraumatic stress disorder in the U.S. Department of Veterans Affairs: a stepped wedge evaluation of an adaptive implementation strategy. Gen Hosp Psychiatry. 2022;77:109-117.
Kilbourne AM, Almirall D, Goodrich DE, et al. Enhancing outreach for persons with serious mental illness: 12-month results from a cluster randomized trial of an adaptive implementation strategy. Implement Sci. 2014;9:163.
Lewis CC, Klasnja P, Powell BJ, et al. From classification to causality: advancing understanding of mechanisms of change in implementation science. Front Public Health. 2018;6:136.
Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.
Williams NJ, Glisson C, Hemmelgarn A, Green P. Mechanisms of change in the ARC organizational strategy: increasing mental health clinicians’ EBP adoption through improved organizational culture and capacity. Adm Policy Ment Health. 2017;44(2):269-283.
French C, Pinnock H, Forbes G, Skene I, Taylor SJC. Process evaluation within pragmatic randomised controlled trials: what is it, why is it done, and can we find it?-a systematic review. Trials. 2020;21(1):916.
Proctor E, Silmere H, Raghavan R, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38(2):65-76.
Chambers DA, Glasgow RE, Stange KC. The dynamic sustainability framework: addressing the paradox of sustainment amid ongoing change. Implement Sci. 2013;8:117.
Wiltsey Stirman S, Baumann AA, Miller CJ. The FRAME: an expanded framework for reporting adaptations and modifications to evidence-based interventions. Implement Sci. 2019;14(1):58.
Audrey S, Holliday J, Parry-Langdon N, Campbell R. Meeting the challenges of implementing process evaluation within randomized controlled trials: the example of ASSIST (A Stop Smoking in Schools Trial). Health Educ Res. 2006;21(3):366-377.
Moore GF, Audrey S, Barker M, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ. 2015;350:h1258.
Esmail LC, Barasky R, Mittman BS, Hickam DH. Improving comparative effectiveness research of complex health interventions: standards from the Patient-Centered Outcomes Research Institute (PCORI). J Gen Intern Med. 2020;35(Suppl 2):875-881.
Oakley A, Strange V, Bonell C, Allen E, Stephenson J. Process evaluation in randomised controlled trials of complex interventions. BMJ. 2006;332(7538):413-416.
Hawe P, Shiell A, Riley T. Complex interventions: how "out of control" can a randomised controlled trial be? BMJ. 2004;328(7455):1561-1563.
Ford I, Norrie J. Pragmatic Trials. N Engl J Med. 2016;375(5):454-463.
Grol R, Grimshaw J. Evidence-based implementation of evidence-based medicine. Jt Comm J Qual Improv. 1999;25(10):503-513.
Flum DR, Davidson GH, Monsell SE, et al. A randomized trial comparing antibiotics with appendectomy for appendicitis. N Engl J Med. 2020;383(20):1907-1919.
Adrian M, Lyon AR, Nicodimos S, Pullmann MD, McCauley E. Enhanced “train and hope” for scalable, cost-effective professional development in youth suicide prevention. Crisis. 2018;39(4):235-246.
Cucciare MA, Marchant K, Abraham T, et al. A randomized controlled trial comparing a manual and computer version of CALM in VA community-based outpatient clinics. J Affect Disord Rep. 2021;6:100202.
Kolko DJ, McGuier EA, Turchi R, et al. Care team and practice-level implementation strategies to optimize pediatric collaborative care: study protocol for a cluster-randomized hybrid type III trial. Implement Sci. 2022;17(1):20.
Kolko DJ, Campo J, Kilbourne AM, Hart J, Sakolsky D, Wisniewski S. Collaborative care outcomes for pediatric behavioral health problems: a cluster randomized trial. Pediatrics. 2014;133(4):e981-992.
Hogue A, Ozechowski TJ, Robbins MS, Waldron HB. Making fidelity an intramural game: localizing quality assurance procedures to promote sustainability of evidence-based practices in usual care. Clin Psychol Sci Pract. 2013;20(1):60.
Hartzler B, Lyon AR, Walker DD, Matthews L, King KM, McCollister KE. Implementing the teen marijuana check-up in schools-a study protocol. Implement Sci. 2017;12(1):103.
Hoffmann TC, Glasziou PP, Boutron I, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014;348:g1687.
Pinnock H, Barwick M, Carpenter CR, et al. Standards for Reporting Implementation Studies (StaRI) statement. BMJ. 2017;356:i6795.
Pawson R. Pragmatic trials and implementation science: grounds for divorce? BMC Med Res Methodol. 2019;19(1):176.
Funding
This work was supported by grants from the Patient-Centered Outcomes Research Institute (PTSD-2019C1-15636), National Institute of Mental Health (UF1 MH121942), and the Department of Veterans Affairs (QUE 20-007, RCS 17-153) to Dr. Fortney. Drs. Fortney and Lyon are supported by the National Institute of Mental Health (P50MH115837). Dr. Curran is supported by the Translational Research Institute (UL1 TR003107), through the National Center for Advancing Translational Sciences of the National Institutes of Health. Dr. Check is supported by the National Institutes of Health (NIH) Pragmatic Trials Collaboratory funded by the NIH Common Fund through cooperative agreement (U24AT009676) from the Office of Strategic Coordination within the Office of the NIH Director, and by the NIH HEAL Initiative (U24AT010961).
Ethics declarations
Contributors:
None.
Conflict of Interest:
The authors declare that they do not have a conflict of interest.
Prior presentations:
None.
Cite this article
Fortney, J.C., Curran, G.M., Lyon, A.R. et al. Similarities and Differences Between Pragmatic Trials and Hybrid Effectiveness-Implementation Trials. J GEN INTERN MED 39, 1735–1743 (2024). https://doi.org/10.1007/s11606-024-08747-1