Introduction

Overview and Definition of Implementation Research

Implementation research is an emergent discipline born from the recognition that the public does not derive sufficient or rapid benefit from advances in the health sciences [1, 2]. Implementation research bridges the gap between scientific knowledge and its application to daily practice with the overall purpose of improving the health of individuals and populations. One often-quoted estimate claims that it takes an average of 17 years for even well-established clinical knowledge to be fully adopted into routine practice [3]. In addition, only about half of trials funded by the National Institutes of Health had been published in peer-reviewed journals two and a half years after study completion [4].

For example, in 2000, only one-third of patients with coronary artery disease received aspirin when no contraindications to its use were present [2]; furthermore, a landmark study estimated that the American public was only receiving about 55 % of recommended care [5]. Implementation research definitions are shown in Table 13.1.

Table 13.1 Implementation research – definitions and terms

A glossary of terms used in implementation research is now available [13, 14]. The definition of implementation research may be expanded to encompass work that promotes patient safety and eliminates racial and ethnic disparities in health care. Health disparities implementation research aims to identify strategies to close gaps in health care through culturally-appropriate interventions for patients, clinicians, health care systems, and populations [15–18]. Under-represented populations make up a significant portion of the U.S. population, shoulder a disproportionate burden of disease, and receive inadequate care [19]. According to the U.S. National Institutes of Health (NIH), ‘dissemination and implementation research intends to bridge the gap between public health, clinical research, and everyday practice by building a knowledge base about how health information, interventions, and new clinical practices and policies are transmitted and translated for public health and health care service use in specific settings’ [6].

Gaps in health care may be classified as ‘errors of omission’ (failure to provide necessary care [20]) and ‘errors of commission’ (delivery of unnecessary or inappropriate care that causes harm). A landmark report from the Institute of Medicine drew attention to patient safety and the concept of preventable injury [21]. Studies of patient safety have focused on ‘medical error resulting in an inappropriate increased risk of iatrogenic adverse event(s) from receiving too much or hazardous treatment (overuse or misuse)’ [20].

For example, inappropriate antibiotic use may promote microbial resistance and cause unnecessary adverse events. Since 1999, public efforts have been underway to promote appropriate prescribing of antibiotics for acute respiratory infections (ARIs) [22]. Based on well-designed studies demonstrating no benefit, guidelines have long recommended against antibiotic use for acute bronchitis [23, 24]; however, physicians continue to prescribe antibiotics for patients diagnosed with ARIs. Although overall antibiotic use for ARIs declined between 1995 and 2002, use of broad-spectrum antibiotic prescriptions for ARIs increased [25]. A more recent implementation research project successfully used a multidimensional intervention in emergency departments to decrease antibiotic prescribing [26].

In response to what may be perceived as overwhelming evidence that thousands of lives are lost each year from errors of omission and commission, there have been strong national calls for health systems, hospitals, and physicians to adopt new approaches for moving evidence into practice, but rigorous supporting evidence is often lacking [27, 28].

As our understanding of implementation science evolves, local clinicians and health systems must strive to improve the quality of care for every patient. Certain local decisions must be based on combinations of incomplete empiric evidence and personal experience. As with the clinician caring for the individual patient, not every decision about local implementation can be guided by data from a randomized trial [29, 30]. However, a stronger evidence base is needed to inform wide-spread implementation efforts. Widespread implementation that outpaces the evidence raises concern about unintended consequences and opportunity costs from public resources wrongly expended on ineffective interventions [30].

Implementation researchers use a variety of techniques, ranging from qualitative exploration to the controlled, group-randomized trial. For example, methods used in social, cognitive, and organizational psychology are also applicable to implementation research [31]. Berwick reminds us of the importance of understanding the mechanism and context through which implementation techniques exert their potential effects within complex human systems [32]. Berwick cautioned that important lessons may be lost through aggregation and rigorous scientific experimentation, challenging the implementation research community to reconsider the basic concept of evidence itself.

Interventions for translating evidence into practice must operate in complex, poorly understood environments with multiple interacting components that may not be easily reducible to a clean, scientific formula. Therefore, we later present situational analysis as a framing device for implementation research. Nonetheless, in keeping with the theme of this book, we mainly focus on the randomized trial as one of the many critical tools for implementation research.

In summary, implementation research is an emerging body of scientific work seeking to close the gap between knowledge generated from the health sciences and routine practice, ultimately improving patient and population health outcomes. Implementation research, which encompasses the patient, clinician, health system, and community, may promote the use of needed services or the avoidance of unneeded services. Implementation research often focuses on patients who are vulnerable because of race/ethnicity or socioeconomic position. By its very nature, implementation research is interdisciplinary.

In this chapter, we discuss barriers to evidence implementation, present tools for implementation research, and provide a framework for designing implementation research studies. The reader is advised that this chapter only provides a basic introduction to several concepts for which new approaches are rapidly emerging. Therefore, our goal is to stimulate interest and promote additional in-depth learning for those who wish to develop new implementation research projects or better understand this exciting field.

Overcoming Barriers to Evidence Implementation

Successful implementation of evidence-based interventions largely depends on their fit with the preferences and priorities of those who shape, deliver, and participate in healthcare [33]. Although the conceptual basis for moving evidence into practice has not been fully developed, a solid grounding in relevant theory may be useful to those designing new implementation research projects [34]. Many conceptual models have been developed in other settings and subsequently adapted for translating evidence into practice [35]. For example, implementation researchers frequently apply Rogers’ theory of innovation diffusion. Rogers proposed that the rapidity of innovation uptake is influenced by three clusters of factors:

  • Perceived advantages of the innovation

  • The classification of new technology users according to rapidity of uptake; and

  • Contextual factors [36].

First, potential users are unlikely to adopt an innovation that is perceived to be complex and inconsistent with their needs and cultural norms. Second, rapidity of innovation uptake often follows a sigmoid-shaped curve, with an initial period of slow uptake led by the ‘innovators.’ Next follows a more rapid period of uptake led by the early adopters, or ‘opinion leaders.’ During the last adoption phase, the rate of diffusion again slows as the few remaining ‘laggards’ or traditionalists adopt the innovation. Finally, contextual or environmental factors such as organizational culture exert a profound impact on innovation adoption, a concept that is explored in more detail in the following sections of this chapter.
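As a rough numerical illustration of the sigmoid-shaped adoption curve described above, the sketch below models cumulative uptake with a logistic function; the midpoint and steepness parameters are hypothetical and are not drawn from Rogers’ work.

```python
import numpy as np

def cumulative_adoption(t, midpoint=5.0, steepness=1.2):
    """Hypothetical logistic (S-shaped) cumulative adoption curve.

    t         -- time since the innovation became available (e.g., years)
    midpoint  -- time at which half of the potential adopters have adopted
    steepness -- how quickly uptake accelerates during the early-adopter phase
    """
    return 1.0 / (1.0 + np.exp(-steepness * (t - midpoint)))

for year in range(0, 11):
    share = float(cumulative_adoption(year))
    # Slow start (innovators), rapid middle (early adopters/opinion leaders), slow tail (laggards)
    print(f"year {year:2d}: {share:5.1%} of potential adopters")
```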

Consistent with the model proposed by Rogers, multiple barriers often work synergistically to hinder the translation of evidence into practice [37]. Interventions often require significant time, money, and staffing. Implementation sites may experience difficulties in implementation as a result of limited resources, competing demands, and entrenched practices. For example, the intervention may have been developed and tested under circumstances that differ from those at the planned implementation site. Further, the implementation team may not adequately understand the environmental characteristics proposed by Rogers’ diffusion theory as critical to the adoption of innovation. Because of such concerns, a thorough environmental analysis is needed prior to widespread implementation efforts [37].

Building upon models proposed by Sung et al. [38] and Rubenstein et al. [8], Figure 13.1 depicts the translational barriers implementation research seeks to overcome. The first translational roadblock lies between basic science knowledge and clinical trial design. The second roadblock involves translation of knowledge gained from clinical trials into meaningful clinical guidance, which often takes the form of evidence-based guidelines.

Fig. 13.1 Translational blocks targeted by implementation research

The third roadblock specific to implementation science occurs between current clinical knowledge and routine practice, carrying important implications for individual practitioners, health care systems, communities, and populations. Given the expansive nature of this third roadblock, a multifaceted armamentarium of tools is required. One tool, industrial-style quality improvement, described below in more detail, operates at the level of the clinical microsystem, the smallest, front-line functional unit that actually delivers care to a patient [39]. Clinical microsystems consist of complex adaptive relationships among patients, providers, support staff, technology, and processes of care. To achieve sustainable success, researchers seeking to overcome this third translational barrier need to be effective advocates for changes in larger macrocosms of the healthcare system including local and governmental health policy. Finally, implementation research may inform clinical trials and basic science.

Health disparities, an underlying challenge for the U.S. healthcare system as a whole, are also a barrier that implementation research must overcome. Members of minority populations, African-Americans and Latinos in particular, as well as individuals of low socioeconomic status, experience markedly worse health outcomes than their white counterparts [40]. Despite national gains in health and life expectancy, black-white gaps persist in areas such as access to care, quality of care, chronic disease risk factors, and disease incidence and related mortality [41–43]. These inequities permeate multiple tiers of the healthcare system, and implementation research is not exempt.

Rapidly changing U.S. demographics indicate that these marginalized minority populations will soon comprise the majority of the U.S. population. In 2012, the U.S. Census Bureau reported that 50.4 % of children under 1 year of age were racial/ethnic minorities and that Latinos were the largest and fastest-growing ethnic group, comprising 16.7 % of the population [42, 44]. At this rate of growth, it is projected that the U.S. will be a majority-minority society by 2050, if not sooner [44].

Health disparities thus add another dimension to the barriers to evidence implementation. Implementation science aims to make evidence-based findings work for real-world patients; however, the systems, providers, and patients of the real world differ considerably from those encountered in the research process. Studies must therefore be tailored to the specific differences among communities and cultures. Should health disparities in minority groups persist, they will impede the success of implementation research and the well-being of the nation.

Finally, to promote the spectrum of research depicted in Fig. 13.1, the 2003 NIH Roadmap acknowledges translational research as an important discipline [45]. In fact, several branches of the NIH now have open funding opportunities for implementation research. The integration of research findings from the molecular to the population level is a priority. The Roadmap seeks to join communities and interdisciplinary academic research centers to translate new discoveries into improved population health [46].

Implementation Research Tools

The tools used to translate clinical evidence into routine practice are varied, and no single tool or combination of tools has proven sufficient or completely effective. Furthermore, it may not be the tool itself but how it is implemented in a system that drives change [47]. In fact, this lack of complete effectiveness spurs implementation research to develop innovative adaptations or combinations of currently available tools [48].

Below, we provide an overview of available tools, which are intended as basic building blocks for future implementation research projects. Although different classification systems have been proposed [49], we arranged these tools by their focus: on the patient, the community, the provider, and the healthcare organization. We acknowledge that this classification is somewhat arbitrary because several implementation tools overlap multiple categories.

Patient-Based Implementation Tools

A growing body of evidence suggests that patients may be successfully ‘activated’ to improve their own care. For example, a medical assistant may review the medical record with the patient and encourage the patient to ask questions at an upcoming visit with the physician. Patients exposed to such programs had better health outcomes, such as improved glycemic control for those with diabetes [50, 51]. In another study, a health maintenance reminder card presented by the patient to the physician at appointments significantly increased rates of influenza vaccination and cancer screening [52].

Other interventions have taught disease-management and problem solving skills to improve chronic disease outcomes. Teaching patient self-management skills is more effective than passive patient education, and these skills have been shown to improve outcomes and reduce costs for patients with arthritis and asthma [53]. As part of the ‘collaborative model,’ self-management is encouraged through better interactions between the patient, physician, and health care team. The collaborative model includes: (1) identifying problems from the joint perspective of the patient and clinical care team; (2) targeting problems, setting appropriate goals, and developing action plans together; (3) continuing self-management training and support services for patients; and (4) active follow-up to reinforce the implementation of the care plan [53].

Community-Based Implementation Tools

The Community Health Advisor (CHA) model has been implemented throughout the world to deliver health messages, promote positive health behavior change, and facilitate access to the health care system [54]. Similarly, the Community Health Worker (CHW) model has been used to engage medically underserved communities on a number of different health issues to help individuals overcome financial, social, political, and cultural barriers to health care [55]. In the CHA/CHW models, community members, usually without formal education in the health professions, undergo special training and certification in order to carry out an intervention or research protocol in their local community. CHA/CHW interventions have been used to promote prevention and treatment for a large array of conditions, including cancer, asthma, cardiovascular disease, depression, and diabetes. These programs have also been developed to decrease youth violence and risky sexual behavior, and may be especially relevant for underserved populations and those living in rural areas. Although promising, CHA/CHW interventions often rely on volunteer workers who may be vulnerable to stress and burnout from work overload. Also, intense training and oversight are often required to assure the accuracy of the health messages being transmitted. A review by Swider found limited high-quality evidence that CHA interventions actually improve health outcomes, a finding attributed to the poor quality of the studies; Swider therefore called for additional rigorous research on the efficacy and underlying mechanisms through which CHA/CHW interventions work [56]. A more recent review commissioned by the Robert Wood Johnson Foundation found that specific CHA interventions may reduce health disparities, particularly for patients with hypertension and diabetes [17].

Because community partnership is itself a vital implementation tool, it is useful to consider the basic principles of interaction with a community as the foundation for research success. In the city of Lawrence, Massachusetts, the Lawrence Research Initiative was created to promote community-participatory and community-responsive research [57]. A document was created to closely guide the research process; it included:

  • The core principles of a partnership approach to research

  • Questions for research partnership agreements for researchers and community groups to review

  • Steps to building successful research partnerships

  • Glossary of research terms in order to develop a common vocabulary that empowers the community to communicate with researchers.

The core principles of partnership as defined in the Lawrence Research Initiative are notions that are applicable to a broad scope of community-based implementation projects [58]. The principles are as follows:

  • Research is helpful to community development

  • True partnerships between the community and academia make better science

  • Researchers and members of the community can and should create good partnerships based on fairness and positive exchanges.

  • Partnerships should be based on fair and equitable distribution of resources

For community-based implementation research to be truly successful, it should be rooted in the foundation of good community partnerships and relationships. This is a principle that will hold true across the use of community health workers, community health advisors, and community based participatory research. Equitable partnerships are key.

Provider-Based Implementation Tools

Clinical Guidelines

Clinical guidelines have been defined as ‘systematically developed statements to assist practitioners’ and patients’ decisions about appropriate health care for specific clinical circumstances’ [59]. Ideally, guideline development involves a complete review of the relevant literature; however, a Canadian group demonstrated that literature reviews were less likely to be completed for more recently developed guidelines [60]. Over the last 30 years, guideline dissemination efforts have often been suboptimal, leading to only modest improvements in care [60, 61]. Moreover, guideline dissemination alone is not sufficient for implementation of best practices [62].

For many clinical situations encountered today, thousands of evidence-based guidelines and practice recommendations have been published. Such sheer volume often precludes the individual practitioner from implementing all recommendations for every patient. As an example, Boyd et al. noted that if one were to treat a hypothetical 79-year-old woman with diabetes, chronic obstructive pulmonary disease (COPD), hypertension, osteoporosis, and osteoarthritis, and follow all recommended guidelines for her multiple co-morbidities, the patient would require 12 medications at a cost of $406 per month [63].

Continuing Medical Education

Continuing medical education (CME), a requirement for ongoing medical licensure, has traditionally relied on text-based, didactic methods to affect clinical knowledge, skills, attitudes, practice patterns, and patient outcomes [64]. However, passive, text-based educational materials and formal CME conferences do not lead to measurable improvements in practice patterns [65, 66]. Rather, CME using interactive techniques that actively engage physicians may have small effects on practice patterns and patient outcomes [67–70]. Physicians who reflect on their own individual performance may identify areas for improvement and seek CME through multifaceted, self-directed learning opportunities. The use of multiple modalities that promote active learning – such as case-based problem solving – has yielded modest improvements in clinical practice [71]. In general, however, more complex behavioral change likely requires practicing skills similar to traditional quality improvement (QI) [72].

With the advantages of being convenient, flexible, and inexpensive, the Internet has become a useful platform to reach a wider audience for interactive CME, while maintaining an effectiveness comparable to traditional approaches [73]. Fordis et al. conducted a randomized controlled trial comparing live, small-group interactive CME workshops with Internet CME [74]. Both groups focused on cholesterol management. All physicians received didactic instruction, interactive cases with feedback, practice tools and resources, and access to expert advice. Knowledge scores for physicians in the Internet CME group increased more than scores for those in the live CME group. Additionally, the online CME group demonstrated a statistically significant improvement in appropriate drug treatment for high-risk patients. Success of the Internet CME may have been partially driven by the participants’ ability to repeatedly return to the website for reinforcement and the ability to structure the learning experience to meet individual needs.

Academic Detailing

Academic detailing relies on site visits to physicians’ offices for intense relationship building and one-on-one information delivery. Important components for successful detailing include: (1) assessment of baseline knowledge and motivations for current behavior; (2) articulating clear objectives for education and behavior; (3) gaining credibility with ties to respected organizations through ongoing relationship building; (4) encouraging physicians to actively participate in educational interventions; (5) using graphic representations for educational materials; (6) focusing on a limited number of ‘take-home’ points; and, (7) supplying positive reinforcement for improved behaviors during follow up [75]. Representatives from pharmaceutical companies have effectively used academic detailing to boost product sales. In a systematic review, academic detailing alone yielded small effects on medication prescribing practices [76].

Opinion Leaders

Several implementation programs have relied on influential colleagues to disseminate evidence-based practices [79]. Opinion leader strategies may include using celebrities, employing people in leadership positions, and asking those doing front-line work to refer ‘up the ladder.’ A systematic review of randomized controlled trials found that opinion leaders may have a positive effect on the uptake of evidence-based practice [77].

Physician Audit and Feedback

The utility of audit and feedback hinges on developing credible data-driven summaries of how patient populations are being managed. In theory, such reports may prompt clinicians to reflect on their personal clinical practices and motivate subsequent improvement. Performance feedback may focus on outcomes (such as the percentage of patients with diabetes who have achieved glycemic control) or process (such as the percentage of patients with diabetes for whom the physician measured glycemic control). The credibility of performance feedback relies on the ability to capture the many clinical nuances that the physician must consider when delivering care to the individual patient. Because the difficulties in capturing these clinical nuances have not yet been completely surmounted, comparisons of performance to a data-driven, peer-based benchmark may be more appropriate than comparison to an arbitrary standard of perfect performance [80]. A systematic review of randomized trials of audit and feedback demonstrated small effects on professional performance, with the size of the effect varying by the targeted behavior. The analysis also suggests that audit and feedback may be more effective when baseline performance is low and when the feedback is provided more than once by a supervisor or colleague, delivered in both verbal and written formats, and linked to specific goals and an action plan [78].
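As a simple illustration of peer-based benchmarking for physician feedback, the sketch below uses invented performance data and takes the mean rate among the top decile of physicians as the benchmark; this is one plausible, data-driven choice shown for illustration rather than the specific method of reference [80].

```python
import random

random.seed(7)

# Hypothetical data: per physician, (number of diabetes patients,
# number meeting the glycemic-control measure).
performance = {}
for i in range(1, 21):
    n_patients = random.randint(20, 60)
    n_controlled = random.randint(int(0.3 * n_patients), n_patients)
    performance[f"MD{i:02d}"] = (n_patients, n_controlled)

rates = {md: controlled / total for md, (total, controlled) in performance.items()}

# Peer-based benchmark: mean rate among the top 10% of physicians,
# a data-driven alternative to demanding 100% performance.
top = sorted(rates.values(), reverse=True)
n_top = max(1, len(top) // 10)
benchmark = sum(top[:n_top]) / n_top

for md, rate in sorted(rates.items()):
    gap = benchmark - rate
    print(f"{md}: {rate:5.1%} (peer benchmark {benchmark:5.1%}, gap {gap:+5.1%})")
```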

Organization-Based Implementation Tools

Industrial-Style Quality Improvement

This type of improvement activity originated outside of health care and has acquired such labels as Total Quality Management (TQM) and Continuous Quality Improvement (CQI). These approaches make two fundamental assumptions: (a) that poor outcomes are attributable to system failures, rather than lack of individual effort or individual mistakes, and (b) that achieving improvement and excellence, even in the absence of system failures, is possible through iterative cycles of planning, acting, and observing the results. In general, complex systems must have built-in redundancy to function well. If an individual makes a mistake at one point in the system, checks and balances built into other parts of the system may prevent an adverse event. However, as described in the example below, patient safety may be endangered by simultaneous failure of multiple system components, thus defeating built-in redundancy.

As a simple example, multiple mechanisms should be in place to ensure that incompatible blood products are not given to hospitalized patients. Delivery of the wrong blood type to a patient requires failure at multiple points, including preparation of the blood in the blood bank and administration of the blood by the nurse. Taking such a systems approach stands in stark contrast to blaming individuals, thereby avoiding low morale and reluctance to disclose mistakes.
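A back-of-the-envelope calculation makes the value of built-in redundancy concrete; the failure probabilities below are purely hypothetical and are not measured error rates.

```python
# Hypothetical, illustrative failure probabilities for two independent checks
# in the blood-product example: the blood bank and the bedside nurse check.
p_blood_bank_error = 0.01      # assumed probability the blood bank issues the wrong unit
p_bedside_check_misses = 0.02  # assumed probability the bedside check misses that error

# If the checks fail independently, the wrong blood reaches the patient only
# when BOTH fail, so redundancy reduces risk multiplicatively.
p_adverse_event = p_blood_bank_error * p_bedside_check_misses
print(f"P(wrong blood administered) = {p_adverse_event:.4%}")  # 0.02% vs 1% with no second check
```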

Improvement activity usually proceeds through a series of ‘plan-do-study-act’ cycles. These cycles emphasize measuring the process of clinical care delivery at the level of the clinical microsystem, which has been previously described. Here, small amounts of data guide the initial improvement process. The process emphasizes small, continuous gains through repeated cycles and does not rely on the statistical significance of the measurements. Although many health care institutions have adopted such methodology based on compelling case studies, additional studies with high-quality experimental methods are still needed [81].

Systems Reengineering

Instead of incremental changes to clinical microsystems, major redesign of the entire system may be undertaken. For example, in the 1990s the Veterans Health Administration (VHA) undertook a major reengineering of its health care system, focusing on the improved use of information technology (IT), the measurement and reporting of performance, and the integration of services [82]. By 2000, the VHA had made statistically significant improvements in nine areas, including preventive care, outpatient care (diabetes, hypertension, and depression), and inpatient care (acute myocardial infarction and congestive heart failure). Additionally, the VHA performed better than the fee-for-service Medicare system on 12 of 13 quality measures [82]. Because systems reengineering requires changes on such a large scale, little evidence exists about its efficiency and effectiveness in yielding more improvements than smaller changes [3].

With the passage of the Affordable Care Act of 2010, U.S. healthcare systems are making necessary changes to how care is delivered and reimbursed in order to increase the quality of care, reduce costs, and improve patient outcomes. The Center for Medicare & Medicaid Innovation (CMMI) [83] has funded hundreds of demonstration projects to test new models of accountable care organizations, primary care transformation, bundled payments of related services, adoption of best practices, and new care and payment models. The aim is for healthcare systems to implement improvements and demonstrate positive changes in patient outcomes and cost savings; CMMI will then disseminate this knowledge for rapid uptake. Further study of how these changes are implemented, as well as program evaluation, will be needed in the coming years.

Computer-Based Systems

Computer-based systems target links in the process of care delivery that are most prone to human error. Such systems may provide clinical decision support by assisting the clinician with making a diagnosis, choosing among alternative treatments, or deciding upon a particular drug dosage. Other functions may include delivery of clinical reminders and computerized provider order entry (CPOE) [84]. A systematic review documented improvements in time to therapeutic goals, decreases in toxic drug levels and adverse reactions, and shorter hospital stays [85]. However, adverse effects of computer-based systems have also been reported, including increased mortality rates, increased rates of adverse drug reactions, delays in medication administration, increased workload, and new types of errors [8689]. A systematic review of studies reporting the effect of CPOE on inpatient medication errors also demonstrated mixed results [90], with increases as well as decreases in errors after the introduction of computer-based systems. Therefore, computer-based systems should not be implemented without safeguards to prevent unintended consequences. We need more work to better understand how computer-based systems interact with human users and the complex health care environment and how these interactions affect quality, safety, and outcomes.
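To make the idea of clinical decision support concrete, the sketch below shows a toy dose-checking rule; the drug, dose limit, and renal-function threshold are invented for illustration and are not clinical guidance.

```python
from dataclasses import dataclass

@dataclass
class Order:
    drug: str
    daily_dose_mg: float
    creatinine_clearance: float  # mL/min, pulled from the patient's record

def dose_alerts(order):
    """Return advisory messages for a hypothetical renally cleared drug.

    Illustrative rule only: assumes a maximum daily dose of 1000 mg and a 50%
    reduction when creatinine clearance falls below 30 mL/min.
    """
    alerts = []
    max_dose = 1000.0
    if order.creatinine_clearance < 30:
        max_dose *= 0.5
        alerts.append("Renal impairment on record: reduced maximum dose applies.")
    if order.daily_dose_mg > max_dose:
        alerts.append(f"Ordered dose {order.daily_dose_mg} mg exceeds "
                      f"the suggested maximum of {max_dose} mg/day.")
    return alerts

print(dose_alerts(Order("drug X", daily_dose_mg=800, creatinine_clearance=25)))
```

In a real computerized provider order entry system, such rules would be one small link in the chain, and, as noted above, they require safeguards and evaluation to avoid unintended consequences.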

Public Report Cards

Public reports on the quality of health care delivered by institutions are proliferating. For example, public reports may focus on risk-adjusted mortality after cardiac surgery or quality at long-term care facilities. In addition, such reports will probably be expanded to include physician groups and individual physicians. Public reports are often promoted under the assumption that the public will use them to choose high-quality providers, thus better enabling a competitive ‘medical marketplace.’ Although scant evidence links report cards to improved health care [91], report cards may have profound adverse effects: (1) physicians may avoid sicker patients to improve their ratings; (2) physicians may strive to meet the targeted rates for interventions even in situations where intervention is inappropriate; and, (3) physicians may ignore patient preferences and neglect clinical judgment [92]. Even worse, report cards may actually widen gaps in health disparities [92].

The Centers for Medicare & Medicaid Services (CMS) introduced public reporting of quality measures in 2001 [93]. In the last several years, the amount of information accessible to the public has grown tremendously. On the CMS website, the public can see data from hospitals, inpatient rehabilitation facilities, long term care hospitals, nursing homes, and home health agencies.

Pay-for-Performance (P4P)

Currently, there is mounting pressure to tie reimbursement for health care services to quality measurement. Although allowing market forces to freely operate through P4P reimbursement may seem logical, systematic reviews have not yielded conclusive results. Because not everything that is important is currently measured, linking reimbursement to measured quality may divert attention from important, but unmeasured aspects of care (i.e., ‘spotlight’ effect). As with public reporting, P4P may actually widen health disparities, although empiric data are lacking.

To date, evidence for the effectiveness of P4P in improving the delivery of health care is uncertain [94]. One study found that when implemented in physician practice groups, P4P produced improvements for those with higher baseline performance but had minimal effect on the lowest performers [95]. Glickman et al. found hospitals voluntarily participating in the P4P initiative for myocardial infarction did not show appreciable improvement [96]. A recent study found that hospitals participating in P4P and public reporting programs sponsored by CMS had slightly greater improvements in quality than those only participating in the public reporting program [97]. Several ongoing studies may soon deliver new insights about P4P.

Designing Implementation Research Studies

Because the implementation science base is still emerging, researchers have at their disposal an array of tools that are variously effective, depending upon the patient population and delivery setting. Moving beyond the tools described above, there is a need to develop innovative adaptations and approaches to bridge the gap between clinical knowledge and health care practice. It is necessary to test the effectiveness of these new approaches with rigorous scientific methods to avoid adverse consequences from the wide-spread dissemination and adoption of unproven interventions [30]. Therefore, in the remainder of this chapter, we discuss the critical design elements for implementation randomized controlled trials, followed by an example of an implementation research study.

Overview of Implementation Research Study Design

Randomized designs for implementation research, somewhat analogous to the traditional clinical trial, allow causal inference and offer protection from measured and unmeasured confounding (see Chap. 3) [48]. As described below in more detail, such designs include an active intervention, random allocation to a comparison or intervention group, and blinded assessment of objective endpoints. Although many of the same principles apply to clinical RCTs and implementation RCTs, we review them here with emphasis on some of the differences between the two.

Implementation studies may also use non-randomized or uncontrolled designs, which fall lower in the hierarchy of evidence. For example, a research team may observe a single group for changes in health care delivery or patient outcomes before and after intervention implementation. In this case, the observed changes may result from multiple factors not associated with the intervention. Secular trends may produce broad, population-based changes, independent of the intervention under study. Without a comparison group, secular trends may be confused with intervention effects [48]. Interrupted time-series designs, with data collected from multiple points in time before and after the intervention, can better account for secular trends.
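A minimal sketch of the segmented-regression analysis commonly used for interrupted time series appears below; the data are simulated, and the variable names and effect sizes are invented for illustration. The model separates the pre-existing secular trend from the level and slope changes that coincide with the intervention.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# 24 monthly observations of a process measure; the intervention starts at month 12.
months = np.arange(24)
post = (months >= 12).astype(int)
months_since = np.where(post == 1, months - 12, 0)

# Simulated data: an underlying secular trend, a hypothetical 8-point jump at the
# intervention, a small post-intervention slope change, and random noise.
rate = 50 + 0.5 * months + 8 * post + 0.2 * months_since + rng.normal(0, 2, 24)

df = pd.DataFrame({"rate": rate, "time": months, "post": post,
                   "time_since": months_since})

# Segmented regression: baseline trend, level change, and slope change.
model = smf.ols("rate ~ time + post + time_since", data=df).fit()
print(model.params)  # 'post' estimates the immediate level change at the intervention
```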

In addition to confounding from secular trends, uncontrolled study designs are susceptible to the effects of ‘non-interventional’ aspects of the intervention. For example, an intervention may bestow more attention on patients or clinicians through data collection, leading to self-reported improvement through placebo-like effects. Comparison groups, even without randomization, offer important protection against secular trends and placebo-like effects. However, non-randomized allocation to intervention and comparison groups does not assure that both groups are similar in all important characteristics. Matched study designs may balance study groups for a limited number of measured characteristics. In contrast, successfully implemented randomization equalizes recognized and unrecognized confounders across study groups and is, therefore, essential for cause-and-effect inference.

In summary, limitations of study designs without randomization or a comparison group include difficulty establishing causality, confounding, bias, and spurious associations from multiple comparisons [29]. Although such studies are generally considered to be lower within the evidence hierarchy, they may provide useful information when randomized controlled trials (RCTs) are not feasible or generate important hypotheses for subsequent testing with more rigorous study designs. We focus the remainder of this chapter on key RCTs for implementation research, in particular the cluster RCT – where clusters of individuals (groups) are randomized [98] rather than individuals. Due to the complex design, we strongly recommend that investigators obtain expert consultation with methodologists and statisticians early during the planning stages.

Other study designs applicable for implementation research and quality improvement projects are reviewed elsewhere [99, 100]. Proctor et al. review funding mechanisms and provide key ingredients for implementation research proposals (Table 13.2) [33]. Competencies for trainees in implementation and dissemination research are also available elsewhere [101].

Table 13.2 Key components for implementation research proposals

Implementation Randomized Controlled Trials

Many principles for the design of high-quality, traditional RCTs discussed elsewhere in this book also apply to implementation research. As a discussion guide, our approach parallels the Consolidated Standards of Reporting Trials (CONSORT), which were designed to encourage high-quality clinical randomized trials and promote a uniform reporting style. The CONSORT criteria emphasize the ability to understand the flow of all actual and potential research participants through the experimental design. Although originally designed for the traditional or ‘parallel’ clinical trial [102, 103], the CONSORT criteria were subsequently modified for the cluster RCT [104, 105].

We refer the reader to a specific example of an implementation randomized trial illustrating the formative development of one of the outcomes [106], challenges and barriers with recruitment [107], the main outcome [108], and secondary outcomes [109] (ClinicalTrials.gov Identifier: NCT00403091; Available at: http://clinicaltrials.gov).

Participants and Recruitment

In contrast to the randomized clinical trial where patients are the unit of intervention and analysis, implementation randomized trials and interventions have a broader reach. For example, key participants in implementation RCTs may be doctors, patients, clinics, hospitals, or hospital wards. Because implementation research is conducted in the ‘real world’ and often seeks to engage busy clinicians, systems, and patients who are otherwise overwhelmed with their usual activities, recruitment may be particularly difficult. Therefore, recruitment protocols for implementation research demand careful consideration and may require a dedicated recruitment and retention team that is specific to the target population. Often multiple approaches (e.g., word of mouth, e-mail, phone, fax, personal contacts, or lists from professional organizations) must be pursued, and still the desired number of participants may not be reached. This is a particular challenge as it pertains to recruiting individuals and engaging systems that serve marginalized minority populations.

African Americans and Latinos continue to bear an unequal burden of disease, yet individuals from these populations are underrepresented in implementation research. For research findings to be widely applicable, a diverse pool of participants is necessary; however, racial and ethnic minorities remain underrepresented in research participation. For example, less than one-third of those enrolled in research studies sponsored by the National Institutes of Health (NIH) are minorities [110, 111], with African Americans comprising 12.6 % and Latinos 7.5 % [111].

Minorities have often been underrepresented in traditional clinical research studies for several reasons. Researchers and participants often do not share common cultural perspectives, which may lead to lack of trust. Moreover, limited resources, such as low levels of income, education, health insurance, social integration, and health literacy, may also preclude participation in research studies [112]. In addition, the history of racism in the U.S., particularly in medical research and clinical care, has contributed to deep suspicion among minority communities about the motives of the medical system [113–115].

Low research participation from communities of color stems directly from these historical inequities and power imbalances that have created a lack of trust between community and academic medical institutions.

Within the past two decades, a series of nationwide mandates for federally funded research have been created to directly address this distrust, including: the NIH Revitalization Act, created in 1993 and updated in 2001, mandating the inclusion of women and minorities in clinical trials [41, 116]; the 1997 Food and Drug Administration (FDA) Modernization Act, providing strict requirements on the standardization of data collection on racial/ethnic minority groups in clinical trials [116]; and the Centers for Medicare and Medicaid Services (CMS) authorization in 2000 of routine care costs for Medicare beneficiaries who participate in clinical trials [116].

Despite these mandates, challenges in the recruitment of minorities still exist. Chapter 8 of this textbook offers additional insight on broad recruitment strategies for implementation research. Table 13.3 offers some solutions to these commonly faced barriers [117].

Table 13.3 Solutions to commonly faced barriers to minority recruitment

Human Subjects

Review and approval of implementation studies by an institutional review board (IRB) is necessary. Often, the research protocol poses minimal risk to participants, and the review may be conducted under an expedited process. We refer the reader to more detailed reviews on this topic [118–120]. Randomization, the intent to publish study findings, or the intent to present at scientific conferences places the work in the research domain. Although usual local quality improvement activities, which are important in health care, do not require IRB approval, the addition of a rigorous design for implementation research does require review.

Investigators designing cluster RCTs must carefully consider the ethical issues that arise when consent occurs at the cluster level with subsequent enrollment of participants within the cluster. If the target of the research is clearly the clinician, informed consent may often be waived for the patient. For studies that focus on the clinician but collect outcomes from medical record review or administrative patient records, the researchers may consider applying for a waiver of informed patient consent. Such waivers are especially reasonable when a large volume of patient records would make patient informed consent impractical. Implementation research usually generates personally identifiable health information, which may be subject to the Health Insurance Portability and Accountability Act (HIPAA). Waiver of HIPAA consent by the patient may often be obtained based on requirements similar to waiver of informed patient consent. Finally, it may be necessary to obtain consent from both patients and providers if the intervention targets both populations.

Investigators should develop detailed plans to protect the security and confidentiality of study data. Data should be housed in physically secured locations with strong logical protection, such as password protection and encrypted files. Access to study data should be only on a ‘need-to-know’ basis. Participant identifiers should be maintained only as necessary for data quality control and linkage. Patients and clinicians should be assured that personal information will not be revealed in publications or presentations. Data integrity should also be protected with detailed protocols for verification and cleaning, which are beyond the scope of this chapter [121].

We agree with the International Committee of Medical Journal Editors (ICMJE) that descriptions of all randomized clinical trials should be deposited in publicly available registries before recruitment begins [122]. The ICMJE includes interventions focusing on process-of-care within the rubric of clinical trials. Trial registries guard against the well-recognized bias that negative studies are less likely to be published than positive studies. Negative publication bias may significantly limit meta-analytic studies, leading to the false conclusion that ineffective interventions are actually effective. Registries also increase the likelihood that participation in clinical trials will promote the public good, even if the study is negative. Although the template is not customized for implementation research, one such registry may be found at http://clinicaltrials.gov.

Intervention Design

Based on the concepts described earlier in this chapter, the design of the intervention is often guided by a formative evaluation process [123, 124]. Formative evaluation incorporates input from end users and stakeholders to refine an intervention during the early stages of development. It is critical that investigators carefully explore and understand the needs of those who will be affected by the intervention. For example, the design can be guided by focus groups or the nominal group technique [125–128]. Glasgow et al. [37] recommend key features to include in the content design:

  • barrier analysis

  • integration of multiple types of evidence

  • adoption of practical trials that address clinician concerns

  • investigation of multiple outcomes, generalizability, and contextual factors

  • design of multilevel programs using systems and social networking models mindful of the integration of the study’s components and levels, and

  • adaptation of program to local needs and ongoing issues.

For example, for an Internet-delivered intervention for physicians, important features to consider include [129]:

  • needs assessment from office practice data

  • multimodal strategies

  • modular design with multiple parts

  • clinical cases for contextual learning

  • tailoring intervention based on individual responses

  • interactivity with the learner

  • audit and feedback

  • evidence-based content

  • established credibility of organization providing website and funding entity

  • patient education resources

  • high level of usability, and

  • accessibility to the Internet site despite limited bandwidth.

Comparison Group

In behavioral research, it is often appropriate to randomize participants to either an active intervention or an attention control. The attention control – in contrast to ‘placebo’ or no intervention – accounts for changes in behavior attributable to social exposure when participants receive services and attention from study personnel [130]. Positive social interactions may create expectations for positive outcomes, potentially confounding intervention effects collected through such methods as self-report. However, the precise implementation of attention controls may be difficult [131].

In our experience, clinicians and communities may be reluctant to enter a study with the possibility of being randomized to a group with no apparent benefit. This problem may be compounded by intensive procedures needed for data collection, regardless of the study group. To overcome such barriers, investigators may offer to open the intervention to the comparison group at the close of the study. Alternatively, study design might more formally incorporate a delayed intervention or test two variations of an active intervention.

Blinding

‘Blinding’ is important to decrease bias in outcome ascertainment (similar to randomized clinical trials). Study personnel who perform outcome assessment should be unaware of whether an individual participant has been assigned to the intervention or comparison group. For example, it may be necessary to blind those doing patient examinations, those performing medical record abstraction, or those administering patient, physician, or organizational surveys. When participants are blinded to the allocation arm, the study is single-blinded. If those delivering the intervention and collecting the outcomes are blinded as well, then the study is double-blinded. If the analysts are unaware of the assignments, then the study is triple-blinded. For implementation research, it is often not feasible to conceal study allocation from the research team.

Units of Intervention, Randomization, and Analysis

Investigators planning an implementation randomized trial must carefully consider the units of study assignment for intervention, randomization, and analysis. Examples of units of intervention include patients, physicians, nurses, clinics, hospitals, and hospital wards, among others. Within any given study, the unit level may vary across components, meaning that the analysis plan must account for the clustered nature of the outcome data.

For example, consider a study of a patient-based intervention that will be implemented through a group of affiliated multi-physician clinics. ‘Contamination’ could arise from physicians learning about the intervention and then exposing comparison patients to part of the intervention. Therefore, for this particular study, the investigators may choose to randomize at the physician level to avoid contamination. Thus, all patients assigned to a given physician will be allocated to the same condition: intervention or comparison.

In practice, the threat of contamination may be more perceived than real, depending upon the exact nature of the intervention and study setting. When present, contamination decreases the precision with which the intervention effect will be measured and increases the risk of a Type II error. As an alternative to cluster-based randomization to overcome contamination, the sample size could be increased [122].
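Cluster randomization avoids contamination but carries its own sample-size penalty, commonly quantified with the standard design effect, DEFF = 1 + (m − 1) × ICC, where m is the average cluster size and ICC is the intraclass correlation coefficient. The sketch below uses hypothetical numbers to show how the required sample size inflates.

```python
def design_effect(mean_cluster_size, icc):
    """Standard variance inflation factor for cluster-randomized designs."""
    return 1 + (mean_cluster_size - 1) * icc

# Hypothetical example: 30 patients per physician, ICC of 0.05 for the outcome.
n_individual = 400                      # patients needed under individual randomization
deff = design_effect(30, 0.05)          # 1 + 29 * 0.05 = 2.45
n_cluster_design = n_individual * deff
print(f"Design effect: {deff:.2f}")
print(f"Patients needed with clustering: {n_cluster_design:.0f}")  # ~980
```

Such calculations are another reason to involve a statistician early when planning a cluster RCT, since the ICC is often unknown and must be estimated or assumed.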

Measurement and Outcomes

In implementation research, the science of selecting measures and defining specific outcomes is rapidly evolving [132]. Concepts of treatment integrity (also known as treatment fidelity) used in the traditional randomized controlled trial also apply to implementation research, as do concepts for assessing external validity – the applicability of the findings to other settings.

The use of a systematic strategy allows the implementation researcher to plan ahead and define the measures and relevant outcomes. The ultimate goal is to have a strong foundation for formative and summative evaluations utilizing quantitative and qualitative methods. Understanding how the intervention worked (or did not) is as important as knowing whether it worked.

Research that uses a mixed methods (or multimethods) approach is suitable to understand problems from multiple perspectives and contextualize information [133]. Mixed methods research is defined as “the type of research in which a researcher or team of researchers combines elements of qualitative and quantitative research approaches (e.g., use of qualitative and quantitative viewpoints, data collection, analysis, inference techniques) for the broad purposes of breadth and depth of understanding and corroboration” [134].

Strategies are available to design, evaluate, and report implementation research studies. A systematic review and a book are available elsewhere [135, 136]; in this chapter, we briefly review the following:

  • Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM)

  • Pragmatic-Explanatory Continuum Indicator Summary (PRECIS)

  • Predisposing, Reinforcing, and Enabling Constructs in Ecosystem Diagnosis and Evaluation (PRECEDE) – Policy, Regulatory, and Organizational Constructs in Educational and Environmental Development (PROCEED).

  • Realist Evaluation

The ‘RE-AIM’ approach is particularly helpful for evaluating the public health impact of interventions [37, 137–142] (a simple numerical sketch follows this list):

  • Reach – the intended target population, study’s reach and representativeness, participants and setting – ‘How many participate?’

  • Effectiveness – the magnitude of intervention effect, adverse outcomes, and costs – ‘Does it work in usual settings?’

  • Adoption – use by the target audience – ‘How many use it?’

  • Implementation – the consistency of use, costs, and adaptations made during delivery – ‘Is it used as intended?’

  • Maintenance – the intervention’s long-term effects, sustainability, and attrition rates – ‘Is it sustained over time?’
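As a rough illustration of how several RE-AIM dimensions can be summarized as simple proportions during evaluation, the sketch below uses invented counts; in practice these dimensions are assessed with richer quantitative and qualitative data.

```python
# Hypothetical counts from a practice-based implementation project.
eligible_patients = 1200
enrolled_patients = 540                   # Reach
clinics_approached = 20
clinics_adopting = 13                     # Adoption
sessions_planned = 6 * enrolled_patients
sessions_delivered_as_intended = 2430     # Implementation (fidelity)
clinics_still_using_at_12_months = 9      # Maintenance

print(f"Reach:          {enrolled_patients / eligible_patients:.0%}")
print(f"Adoption:       {clinics_adopting / clinics_approached:.0%}")
print(f"Implementation: {sessions_delivered_as_intended / sessions_planned:.0%}")
print(f"Maintenance:    {clinics_still_using_at_12_months / clinics_adopting:.0%}")
# Effectiveness is estimated from the trial's outcome analysis rather than a single proportion.
```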

The Pragmatic-Explanatory Continuum Indicator Summary (PRECIS) originated from a Canadian and European initiative to promote trials in developing and middle-income countries [143]. Within this context, a pragmatic trial seeks to answer the question, "Does an intervention work in usual settings under usual conditions?"; a pragmatic trial tests the effectiveness of the intervention and informs decision makers [144]. An explanatory trial seeks to answer the question, "Does an intervention work in research settings?"; an explanatory trial tests the efficacy of an intervention [145, 146].

The visual representation of measures across the ten domains described in PRECIS allows scientists and the community in general to understand the applicability of interventions. The ten domains include: (1) participant eligibility criteria, (2) experimental intervention flexibility, (3) practitioner expertise (experimental), (4) comparison intervention, (5) practitioner expertise (comparison), (6) follow-up intensity, (7) primary trial outcome, (8) participant compliance, (9) practitioner adherence, and (10) analysis of primary outcome. For a more detailed discussion, the reader is referred elsewhere [143, 147].

First proposed in 1974, PRECEDE-PROCEED is an approach for assessing the effects of health programs and health education that is applicable to implementation research [148–150]. The realist approach offers a quantitative and qualitative model for synthesizing the effects of complex programs that are intimately related to the contextual factors in which the program is developed and evaluated [151–153]. A realist approach addresses some of the shortcomings of purely quantitative methods for program evaluation [152].

Reviews are available to guide the design, measurement, and reporting of implementation research studies [132, 142, 144, 154]. The addition of information about the context, protocol implementation, and generalizability – among other characteristics – is an enhancement to the CONSORT reporting guidelines for the traditional efficacy study [142, 154].

Approaches to Randomization

Randomization, also described elsewhere in this book, is a procedure to assure that study units are allocated to the study conditions according to chance alone. The specific approach to randomization is described as ‘sequence generation’ and may include matching or stratification [104]. Allocation concealment is a ‘technique used to prevent selection bias by concealing the allocation sequence from those assigning participants to intervention groups, until the moment of assignment’; its purpose is to prevent researchers from influencing which participants are assigned to a given group [102, 103]. The concealment may be simply based on a coded list of randomly ordered study groups created by a statistician who is not a member of the intervention team. After enrollment, each participant is assigned to a study group based on the sequence in the list.
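The sketch below shows one way such a coded allocation list might be generated by a statistician outside the intervention team; permuted blocks are used here as a common illustrative choice, and the block size, seed, and group codes are arbitrary.

```python
import random

def permuted_block_sequence(n_units, block_size=4, seed=2024):
    """Generate an allocation sequence in randomly permuted blocks.

    Returns coded assignments ('A'/'B') so the enrolling team sees only a code,
    supporting allocation concealment until the moment of assignment.
    """
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_units:
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n_units]

allocation_list = permuted_block_sequence(12)
for slot, code in enumerate(allocation_list, start=1):
    print(f"enrollment slot {slot:02d} -> group {code}")
```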

For cluster-randomized trials, the assignment of individuals to a study group is determined at the level of the cluster, which increases the opportunity for selection bias from failed concealment. For example, consider a cluster RCT where randomization occurs at the physician level with subsequent enrollment of patients with diabetes from the physicians’ practices. Depending upon the nature of the intervention, physicians may be able to determine their randomization group. If the randomized physician also recruits patients for the study, this knowledge of the randomization group may lead to biased patient selection. An ‘attention control’ comparison group, described above, would also decrease the chance that the physician discovers the assignment.

Successful randomization ensures balanced characteristics at the unit of randomization, and larger numbers of randomized units increase the chance of successful randomization. Investigators should be aware that for cluster RCTs, successful randomization does not ensure balanced characteristics at units below the level of randomization [155]. Again, consider the illustration above where randomization occurs at the physician level. Although this design may produce intervention and comparison groups that are balanced based on physician characteristics, there may be important imbalances in patient characteristics, decreasing the power of randomization. To guard against imbalances of lower-level units in cluster randomized trials, investigators might consider stratifying or matching on a limited number of critical characteristics [156]. Alternatively, imbalances may require statistical adjustment at the point of analysis after the study has been completed. Decisions about matched study designs for cluster randomized trials are complex and beyond the scope of this chapter.
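
To make the stratification idea concrete, the following sketch randomizes physician clusters to study arms separately within strata of one critical characteristic; practice size is assumed purely for illustration, and the function name and seed are hypothetical.

import random
from collections import defaultdict

def stratified_cluster_randomization(physicians, seed=7):
    """physicians: iterable of (physician_id, stratum) pairs; returns {id: arm}."""
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for physician_id, stratum in physicians:
        by_stratum[stratum].append(physician_id)
    assignment = {}
    for stratum, ids in by_stratum.items():
        rng.shuffle(ids)
        half = len(ids) // 2           # split each stratum as evenly as possible
        for pid in ids[:half]:
            assignment[pid] = "intervention"
        for pid in ids[half:]:
            assignment[pid] = "comparison"
    return assignment

# Example: eight physicians stratified by small versus large practice.
doctors = [(i, "small" if i < 4 else "large") for i in range(8)]
print(stratified_cluster_randomization(doctors))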

Intent-to-Treat

As with the traditional clinical randomized trial, the primary analysis for an implementation randomized trial should test hypotheses specified a priori and should follow intent-to-treat principles [157]. With the intent-to-treat approach, all units are analyzed with the group to which they were originally randomized, regardless of whether the units are subsequently exposed to the intervention (i.e., cross over). For example, in a randomized trial of an Internet-based continuing medical education (CME) intervention for physicians, outcomes for all physicians randomized to the intervention group must be analyzed as part of the intervention group, regardless of whether the physician visited the Internet site. Intent-to-treat protocols preserve the power of randomization by protecting against bias resulting from differential participation or cross-over among intervention units with a greater or lesser propensity for success.
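
A minimal sketch of this analytic distinction, using invented physician-level data from the Internet CME example (all variable names and values are hypothetical), is shown below.

import pandas as pd

df = pd.DataFrame({
    "physician_id": [1, 2, 3, 4, 5, 6],
    "assigned_arm": ["intervention"] * 3 + ["comparison"] * 3,
    "visited_site": [True, False, True, False, False, False],  # non-adherence in arm 1
    "outcome":      [0.82, 0.61, 0.77, 0.58, 0.66, 0.63],      # e.g., guideline adherence
})

# Intent-to-treat: summarize outcomes by the arm to which physicians were randomized,
# regardless of whether they ever visited the CME site.
print(df.groupby("assigned_arm")["outcome"].mean())

# A 'per-protocol' secondary analysis would instead restrict the intervention arm to
# physicians who actually used the site.
per_protocol = df[(df["assigned_arm"] == "comparison") | df["visited_site"]]
print(per_protocol.groupby("assigned_arm")["outcome"].mean())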

Unfortunately, participants lost to follow up may generate no data for analysis. As with violation of the intent-to-treat principle, loss to follow up may reduce the power of randomization. Although complete follow up is desirable, it is usually not obtainable. Many scientists hold that for clinical trials, loss to follow up of greater than 20 % introduces severe potential for bias [158]. Therefore, many study designs include run-in phases before randomization. From the perspective of internal validity, it is better to exclude participants before randomization than have participants lost to follow up, cross between study groups, or become non-adherent to intervention protocols after randomization. For example, in the study of Internet-based CME described above, physicians might be required to demonstrate a willingness to engage in Internet learning and submit data for study evaluation before randomization. According to the CONSORT criteria for group randomized trials, investigators must carefully account for all individuals and clusters that were screened or randomized [104].

Retention, Special Populations

Retention of participants is a challenging issue in research and is of particular concern when working with vulnerable and underserved populations. The broad goal of implementation science is to translate evidence-based practice into real-world application. Specific to this goal, there is an overarching need to target and tailor implementation to the particular system, providers, and patients in a given community. Methods that have typically been implemented at the system and provider level within academic health settings, with predominantly homogeneous white patient participation, will not translate well to more diverse, community-driven settings.

Lack of retention and loss to follow up can be a barrier in implementation research, especially for projects concerning health disparities in minorities. In the recently conducted Healthy Aging in Neighborhoods of Diversity Across the Life Span (HANDLS) study, Ejogu et al. present a multifaceted approach to recruitment and, specifically, retention strategies for minority and low socioeconomic status (SES) participants [159]. In this 20-year longitudinal examination of how race and SES influence the development of age-related health disparities, the investigators created a multifactorial recruitment and retention strategy that targeted known barriers and identified those unique to the study’s urban environment [159]. Through this approach, they recruited over 3,700 participants, of whom 59 % were African American, with a 75 % baseline completion rate [159]. The success of the HANDLS investigation relied primarily on its emphasis on a community-based platform to alleviate many of the barriers that might otherwise exclude this key population from participation and retention.

Underrepresented minorities are a special population whose participation in implementation research holds promise for revealing methods to reduce health disparities [160–162]. It may be difficult to ascertain the true population benefit or effectiveness of an intervention if a significant proportion of its participants are lost to follow-up. While Chap. 8 of this textbook is dedicated to broad strategies for recruitment and retention, this section offers specific insight into retention issues pertaining to the participation of minorities. An overarching challenge between the health care system and minority communities is the establishment of trust, which repeatedly emerges as a key requirement for retaining the attention and participation of this population in implementation research [160, 163]. Yancey et al. suggest targeted approaches to decreasing participant loss specifically in underrepresented and minority groups [160–162, 164–166]:

  • Maintain intensive follow-up and contact with subjects

  • Retain interviewers, field staff, and study staff over time

  • Involve staff from the targeted community

  • Provide social support and offer accessible locations for study visits and/or data collection

  • Ensure timely incentive payments and accessibility of project staff

  • Encourage study staff’s knowledge of community dynamics, and promote project leadership/staff visibility and involvement in the community.

Statistical Analysis

Statistical analysis for cluster RCTs is a vast, technical topic that falls largely beyond the scope of the basic introduction provided in this book. However, an example will illustrate some important principles. Consider the previous illustration in which physicians are randomized to an intervention or comparison group, with patients subsequently enrolled and assigned to the same study condition as their physician. To conduct the analysis at the physician level, the investigators might simply compare the mean post-intervention outcomes for the two study groups. However, this approach sacrifices statistical power, because the number of physicians randomized will be smaller than the number of patients included in the study. Alternatively, the investigators could plan a patient-level analysis that appropriately accounts for the clustering of patients within physicians. The investigators could also collect outcomes for intervention and comparison patients before and after intervention implementation. Generalized estimating equations (GEE) could then be used to compare the change in study endpoints over time for the intervention versus the comparison group. Here, the main study effect is reflected by a group-by-time interaction variable included in the multivariable model. This approach uses a marginal, population-averaged model to account for clustered observations and can also adjust for observed imbalances in the study groups. Alternatively, the analyst may use a cluster-specific (or conditional) approach that directly incorporates random effects. Murray reviewed the evolving science and controversies surrounding the analysis of group-randomized trials [156].
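
As a minimal sketch of such a marginal model, assume simulated patient-level data with a binary quality-of-care endpoint; all variable names, effect sizes, and the exchangeable working correlation are illustrative assumptions, not specifications from the studies cited here.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_physicians, patients_per_md = 30, 20
rows = []
for md in range(n_physicians):
    arm = 1 if md < n_physicians // 2 else 0               # cluster-level assignment
    md_effect = rng.normal(0.0, 0.3)                       # shared physician effect (clustering)
    for period in (0, 1):                                  # pre- and post-intervention periods
        for _ in range(patients_per_md):
            logit = -0.5 + md_effect + 0.6 * arm * period  # true group-by-time effect
            outcome = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))
            rows.append({"physician_id": md, "group": arm,
                         "period": period, "outcome": outcome})
df = pd.DataFrame(rows)

# Marginal (population-averaged) model: the group-by-period interaction carries the main
# study effect, and the exchangeable working correlation accounts for clustering of
# patients within physicians.
model = smf.gee("outcome ~ group * period", groups="physician_id", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())

A cluster-specific alternative would replace the GEE with a mixed-effects (random-intercept) model for physicians.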

Although the main analysis should follow intent-to-treat principles as described above, most implementation randomized trials include a range of secondary analyses. Such secondary analyses may yield important findings, but they do not carry the power of cause-and-effect inference. ‘Per-protocol’ or ‘compliers only’ analyses may address the impact of the intervention among those who are sufficiently exposed or may examine dose-response relationships between intervention exposure and outcomes. Mediation analysis using a series of staged regression models may investigate mechanisms through which an intervention leads to a positive study effect [167, 168].
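
To illustrate the staged-regression logic behind a simple mediation analysis (a Baron-and-Kenny-style sequence; the variable names, linear models, and simulated data below are assumptions for illustration rather than the specific methods of references [167, 168]):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
arm = rng.integers(0, 2, n)                                   # randomized group (0/1)
knowledge = 50 + 10 * arm + rng.normal(0, 5, n)               # mediator improved by intervention
outcome = 0.4 + 0.004 * knowledge + rng.normal(0, 0.05, n)    # outcome driven by the mediator
df = pd.DataFrame({"arm": arm, "knowledge": knowledge, "outcome": outcome})

total = smf.ols("outcome ~ arm", data=df).fit()               # step 1: total effect of randomization
a_path = smf.ols("knowledge ~ arm", data=df).fit()            # step 2: effect on the proposed mediator
direct = smf.ols("outcome ~ arm + knowledge", data=df).fit()  # step 3: direct effect, adjusting for mediator
print(total.params["arm"], a_path.params["arm"], direct.params["arm"])

Attenuation of the intervention coefficient between steps 1 and 3 suggests that the effect operates, at least in part, through the proposed mediator.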

Sample Size Calculations

The investigator must determine the number of participants necessary to detect a meaningful difference in study endpoints between the intervention and comparison groups; this determination rests on the power of the study. Typically, a power of 80 % is considered adequate to limit the likelihood of a false negative result. If an intervention is sustained over an extended period of time, the investigators may wish to test specifically for effect decay, perhaps with a time-trend analysis; such a hypothesis of no difference demands a special approach to the power calculation. Sample size calculations for traditional randomized trials are discussed elsewhere in this book (see Chap. 15).

The analysis for an implementation randomized trial may be at a lower level than the unit of randomization. Under these circumstances, the power calculations must account for the clustering of participants within upper-level units, such as the clustering of patients within physicians from the example above. Failure to account for the hierarchical data structure may inflate the observed statistical significance and increase the likelihood of a false positive finding [169].

Several approaches to accounting for the clustering of, say, patients within physicians from the above example rely on the intra-class correlation coefficient (ICC). The ICC is the ratio of the between-cluster variance to the total sample variance (between-cluster plus within-cluster). In this example, the ICC would be a measure of how ‘alike’ patient outcomes are within the physician clusters. If the ICC is 1, the outcomes for all patients clustered within a given physician are identical. If the ICC is 0, clustering within physicians is not related to patient outcomes [170]. In other words, with an ICC of 1, adding more patients to a cluster provides no additional information. Therefore, as the ICC increases, the sample size must increase to retain the same power. For 0 < ICC < 1, increasing the number of patients will increase study power less than increasing the number of physicians. Typical values for ICCs range from 0.01 to 0.50 [171].
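
Written out, the definition above corresponds to

$$ \rho =\frac{{\sigma}_{\mathrm{between}}^{2}}{{\sigma}_{\mathrm{between}}^{2}+{\sigma}_{\mathrm{within}}^{2}}, $$

where the two variance components are the between-cluster and within-cluster variances, so that ρ ranges from 0 (no clustering) to 1 (complete clustering).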

Although the topic of power calculations for group randomized trials is vast and largely beyond the scope of this book, Donner provides a straightforward framework for simple situations [169]. Taking this approach, the analyst first calculates an unadjusted sample size (Nun) using approaches identical to those described elsewhere in this book for the traditional randomized clinical trial. Next, the analyst calculates a sample inflation factor (IF) that is used to derive a cluster-adjusted sample size (Nadj). Then:

$$ \mathrm{IF}=1+\left(m-1\right)\rho $$
and
$$ {N}_{\mathrm{adj}}={N}_{\mathrm{un}}\times \mathrm{IF}, $$

where m is the number of study units per cluster, and ρ is the ICC.
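
For example, with illustrative values rather than figures from any particular study, suppose each physician contributes m = 20 patients and the anticipated ICC is ρ = 0.05. Then

$$ \mathrm{IF}=1+\left(20-1\right)\times 0.05=1.95\kern1em \mathrm{and}\kern1em {N}_{\mathrm{adj}}=400\times 1.95=780, $$

so an unadjusted requirement of 400 patients becomes 780 patients, or 39 physician clusters of 20 patients each, before any allowance for loss to follow up.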

Situational Analysis and External Validity

Because implementation randomized trials occur in a ‘real-world’ setting, we place special emphasis on the understanding and reporting of context. In contrast to the traditional randomized clinical trial, the study setting for the implementation trial is an integral part of the study design. To address the importance of context in implementation research, Davidoff and Batalden promote the concept of situational analysis for quality improvement studies [81]. We believe that many of these principles are relevant to the implementation randomized trial. For example, published reports of implementation research should include specific details about the clinic setting, the patient population, prior experience with system change, and how the context contributed to understanding the problem for which the study was designed. In addition, specialized approaches to economic evaluation provide important context for interpreting the results of implementation trials [172].

Because implementation research often focuses on dissemination to large populations, external validity, or generalizability, acquires special importance. One must consider the extent to which study findings apply to other patients, physicians, clinics, or geographic locations.

Summary

Implementation research bridges the gap between scientific knowledge and its application to daily practice with the overall purpose of improving the health of individuals and populations. To advance the science of implementation research, the Institute of Medicine published findings from the Forum on the Science of Health Care Quality Improvement and Implementation in 2007 [173] and the Veterans’ Health Administration sponsored a state-of-the-art (SOTA) conference in 2004 [3]. Together, these documents summarized current knowledge, identified barriers to implementation research, and defined strategies to overcome these barriers. Given the well-documented quality and safety problems of our health care system despite the vast resources invested in the biomedical sciences, we need to promote interest in implementation research, an emerging scientific discipline focused on improving health care for all, regardless of geography, socioeconomic status, race, or ethnicity.