Randomized controlled trials (RCTs) are the gold standard for generating evidence on the effectiveness of healthcare interventions. Unfortunately, RCTs frequently fail to provide results that patients, clinicians, researchers, or policymakers can confidently apply as the basis for clinical decision-making in the real world.1 Safeguards against uninformative research begin early in study development, and well-conceived pilot studies can play a critical role in the conduct of high-quality clinical trials.2 We propose using implementation science—the study of how to adopt best practices into real-world settings—as a natural framework for pre-RCT pilot studies, viewing these pilot studies as a critical opportunity to improve the informativeness of RCTs.

WHY DO WE NEED RIGOROUS PILOT TRIALS?

The burden of uninformative RCTs is substantial. Money, time, and participants’ efforts are wasted when research is conducted without sufficient attention to the contextual factors necessary for applying study results. In some cases, these factors are captured by standard RCT quality criteria (e.g., CONSORT). In other cases, however, a study’s lack of informativeness stems from inadequate consideration of broader factors. Zarin et al. posited five necessary conditions for a trial to be informative: (1) the study hypothesis must address an important and unresolved question; (2) the study must be designed to provide meaningful evidence related to this question; (3) the study must be feasible; (4) the study must be conducted and analyzed in a scientifically valid manner; and (5) the study must report methods and results accurately, completely, and promptly.3

Unfortunately, many contemporary trials fail these necessary conditions. One overt example is a multicenter randomized trial assessing the impact of pre-hospital antibiotics for sepsis administered by emergency medical services (EMS) personnel.4 The trial found no mortality difference, but application of the results is limited by randomization violations—some EMS personnel “purposefully opened the envelopes until they found an envelope instructing randomization to the intervention group.” This violation of study procedures was attributed to “overenthusiasm of EMS personnel wanting to treat as many patients as possible with antibiotics.” Plausibly, pre-trial identification of these beliefs about the acceptability of withholding treatment from study patients could have prompted a responsive approach that preserved the fidelity of randomization.

Even trials with careful attention to internal validity may provide less meaningful results if the trial context turns out to be different from what was expected.5, 6 For example, studies of protocolized early sepsis management found no difference between protocolized and usual care groups, largely because sepsis management in the usual care group was similar to that in the protocolized care group.7 Similarly, a large, well-conducted trial evaluating conservative oxygen therapy versus usual care during mechanical ventilation found no difference between groups, but its informativeness was limited by an unexpectedly low dose of oxygen in the usual care group.8 Again, such validity and context limitations could conceivably be mitigated by specifically evaluating and addressing them during the pilot phase.2 Although guidelines are available to support pilot study methodology,9, 10 we suggest that the yield of pre-RCT pilot studies could be further enhanced by applying implementation science principles.

WHAT IS IMPLEMENTATION SCIENCE AND HOW CAN ITS STRATEGIES APPLY TO PILOT STUDIES?

Implementation science is the study of how best to deliver evidence-based practices in the real world.11 It is an emerging field that applies rigorous theory, process models, and frameworks to generate insights, and it emphasizes transdisciplinary collaboration and stakeholder engagement to promote external validity and scalability.12 In simplified terms, implementation research identifies how best to help people “do the thing,” where “the thing” is an effective intervention or practice.13 In parallel, pre-RCT pilot studies can be viewed as research identifying how best to help investigators conduct informative RCTs. Importantly, a hallmark of implementation science is its focus on multilevel contextual factors.14 Traditional pre-RCT pilot studies evaluate the feasibility of the planned trial’s design with a focus primarily on patient-level context; they do not typically seek to identify key provider-, organization-, and policy-level contextual factors that may affect the ultimate informativeness of the planned RCT. Using implementation science to inform pre-RCT pilot studies, in contrast, aims to anticipate and address the contextual factors that shape the RCT’s eventual informativeness.

To preempt obstacles leading to uninformative RCTs, we suggest early integration of implementation science principles in the pre-RCT pilot phase. This is particularly applicable to investigations of complex interventions—the nuanced relationship between complex interventions and the clinical and experimental contexts in which they are tested poses a greater potential threat to informativeness. We encourage researchers planning RCTs of complex interventions to consider conducting preparatory pilot studies with the following elements:

1. Measure and report implementation outcomes

Pre-RCT pilot studies should have explicit objectives and testable hypotheses or evaluation questions related to implementation of the planned RCT. Because pilot studies are not designed to provide stable estimates of treatment effectiveness, investigators should avoid evaluating efficacy endpoints.15 Rather, valuable objectives can be drawn from the core set of established implementation outcomes, including acceptability, adoption, and feasibility.16 Additionally, given increasing awareness of the importance of patient-centered outcomes but the lack of a standardized approach for outcome selection,17 an important role of pilot studies may be to identify and prioritize the outcomes that matter most to patients and other stakeholders. Table 1 presents examples of how these outcomes can be used in pre-RCT pilot studies. Measures should be individualized to each pilot study and can include both quantitative and qualitative outcomes. Sample sizes should be thoughtfully chosen based on the primary outcome measures selected for the pilot study, as illustrated below.
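For instance, when a pilot’s primary outcome is a simple proportion (e.g., the fraction of eligible patients who consent, as a measure of feasibility), a standard normal-approximation precision calculation can anchor the sample size. This is a minimal sketch; the anticipated proportion and precision target below are illustrative assumptions, not values from any specific pilot study:

$$ n \approx p(1-p)\left(\frac{z_{1-\alpha/2}}{d}\right)^{2} $$

where $p$ is the anticipated proportion, $d$ is the desired half-width of the confidence interval, and $z_{1-\alpha/2}$ is the standard normal quantile (1.96 for a 95% interval). For example, to estimate an anticipated consent rate of $p = 0.8$ to within $d = 0.10$, $n \approx 0.8 \times 0.2 \times (1.96/0.10)^{2} \approx 62$ participants.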

2. Apply a conceptual framework

Table 1 Potential Application of Implementation Science Outcomes to Pilot Study Design and Interpretation

A conceptual framework explains phenomena, organizes conceptually distinct ideas, and helps visualize relationships that cannot be observed directly. There are many implementation-oriented frameworks, and it is beyond the scope of this perspective to review them comprehensively. One we have found particularly helpful is the Consolidated Framework for Implementation Research (CFIR), a widely used taxonomy in implementation science developed to guide systematic assessment of multilevel implementation contexts and to identify factors that might influence intervention implementation and effectiveness.18 The CFIR describes five interacting domains that influence implementation effectiveness: the intervention, individuals, the inner setting, the outer setting, and the implementation process. Applying this framework to a pilot study enables a richer understanding of the factors that impede or support the conduct of a successful subsequent clinical trial. Figure 1 demonstrates how the CFIR can be used to uncover and organize the complex factors associated with the translation of a pilot study into a successful subsequent RCT.

3. Use the pilot study implementation findings to inform and adapt the planned RCT

Fig. 1 Conceptualization of a pilot study within the Consolidated Framework for Implementation Research model. Adapted from www.cfirwiki.net.

Note that Fig. 1 depicts the pilot study intervention component as a jagged puzzle piece that fits imperfectly into the other domains; through the application of CFIR constructs during the pilot process, the piece representing the planned RCT intervention comes to fit much more precisely into the context in which it will be implemented. Depending on the goals and timeline of the preparatory pilot study, this can be a traditional one-time process with careful data collection to inform and adapt the planned RCT, or a rapid-cycle iterative improvement process with intervention delivery improvements made in real time. Table 1 includes examples of how pre-RCT pilot study findings might be used to inform a future RCT.

4. Report pilot study results, including qualitative and quantitative findings

Pre-RCT pilot studies can be reported as works of implementation science, with explicitly defined a priori objectives, rigorous scientific processes, and systematic documentation of results and subsequent adaptations to the planned RCT. Applying an implementation science framework to preparatory pilot studies will arguably improve the informativeness of future RCTs. This grounded approach will also improve pilot study reporting, increasing the knowledge gained through dissemination of results. An explicit implementation science framework ensures that key contextual factors are identified and addressed, as well as rigorously recorded and reported. Reporting implementation science-informed pilot results may improve the informativeness of RCTs by documenting implementation-oriented design elements intended for inclusion in the RCT, as well as planned elements of the RCT that required redesign because of problems detected by the pilot. In addition, pilot study reports can inform the broader research community about issues related to the studied intervention and thus advance future research in the target area more broadly.

LIMITATIONS TO THIS APPROACH

The value of conducting implementation science-guided preparatory pilot studies is likely to be greatest for RCTs testing complex interventions and may not extend equally across all RCT designs. Indeed, utility may differ by implementation outcome (e.g., acceptability of the trial’s primary outcome versus adoption of intervention components). Furthermore, applying an implementation science approach may not be feasible or affordable within traditional research time or budget limits. Investigators should carefully consider the full range of implementation science methods available and select those that best align with the budget, timeline, and objectives of their pilot study (e.g., brief quantitative electronic surveys versus in-depth, face-to-face interviews with key stakeholders). In addition, changes in traditional research evaluation and funding mechanisms may be required to support this approach to pilot study methodology.

CONCLUSION

Uninformative clinical trials are a major challenge in medicine. Carefully designed, conducted, and interpreted pilot studies can be important foundations for successful clinical trials. Using implementation science to guide these pre-RCT pilot studies may improve the value of investments in RCTs by addressing contextual factors influencing the informativeness of RCT results to the healthcare delivery community and its patients.