One of the most critical issues in mental health services research is the gap between what is known about effective treatment and what is provided to and experienced by consumers in routine care in community practice settings. While university-based controlled studies yield a growing supply of evidence-based treatments and payers increasingly demand evidence-based care, there is little evidence that such treatments are either adopted or successfully implemented in community settings in a timely way (Bernfeld et al. 2001; Institute of Medicine 2001; National Advisory Mental Health Council 2001; President's New Freedom Commission on Mental Health 2003; U.S. Department of Health and Human Services 1999, 2001, 2006). Indeed, new interventions are estimated to "languish" for 15–20 years before they are incorporated into usual care (Boren and Balas 1999). This implementation gap prevents our nation from reaping the benefit of billions of US tax dollars spent on research and, more important, prolongs the suffering of millions of Americans who live with mental disorders (President's New Freedom Commission on Mental Health 2003). Ensuring that effective interventions are implemented in diverse settings and populations has been identified as a priority by NIMH Director Thomas Insel (2007).

The gap between care that is known to be effective and care that is delivered reflects, in large measure, a paucity of evidence about implementation. Most information about implementation processes relies on anecdotal evidence, case studies, or highly controlled experiments that have limited external validity (Glasgow et al. 2006) and yield few practical implications. A true science of implementation is just emerging. Because of the pressing need to accelerate our understanding of successful implementation, concerted efforts are required to advance implementation science and produce skilled implementation researchers.

This paper seeks to advance implementation science in mental health services by reviewing the emergence of implementation as an issue for research, addressing key issues of language and conceptualization, presenting a skeleton heuristic model for the study of implementation processes, and identifying implications for research and training in this emerging field.

An Emerging Science

The seminal systematic review on the diffusion of service innovations conducted by Trisha Greenhalgh et al. (2004) included a small section on implementation, defined as "active and planned efforts to mainstream an innovation within an organization" (Greenhalgh et al. 2004, p. 582). Their review led these authors to conclude that "the evidence regarding the implementation of innovations was particularly complex and relatively sparse" and that, at the organizational level, the move from considering an innovation to successfully routinizing it is generally a nonlinear process "characterized by multiple shocks, setbacks, and unanticipated events" (Greenhalgh et al. 2004, p. 610). They characterized the lack of knowledge about implementation and sustainability in health care organizations as "the most serious gap in the literature … uncovered" (Greenhalgh et al. 2004, p. 620) in their review.

Fortunately, there is evidence that the field of implementation science is truly emerging. In particular, the mental health services field appears primed to advance the science of implementation, as reflected by several initiatives. NIMH convened a 2004 meeting, "Improving the fit between mental health intervention development and service systems." Its report underscored that "few tangible changes have occurred" in intervention implementation (National Institute of Mental Health 2004), requiring new and innovative efforts to advance implementation knowledge and to expand the supply of implementation researchers. The meeting revealed a rich body of theory ripe for shaping testable implementation strategies and demonstrated that diverse scholars could be assembled around the challenge of advancing implementation science. This meeting was followed by other NIMH events, including a 2005 meeting, "Improving the fit between evidence-based treatment and real world practice"; a March 2007 technical assistance workshop for investigators preparing research proposals in the areas of dissemination or implementation; and sessions and an interest group devoted to implementation research at the 2007 NIMH Services Research Conference.

Implementation research is advancing in "real time." The NIH released the Dissemination and Implementation Research in Health program announcement (PAR-06-039), appointed an NIMH Associate Director for dissemination and implementation research, and established a cross-NIH ad hoc review committee on these topics. NIH has funded a small number of research grants that directly address dissemination and implementation, including randomized trials of implementation strategies. More recently, the Office of Behavioral and Social Sciences Research (OBSSR) launched an annual NIH Dissemination and Implementation conference, and the journal Implementation Science was launched in 2006. While these developments are important stepping stones in the development of the field, they reflect only the beginnings of an organized and resourced approach to bridging the gap between what we know and what we deliver.

Evolving Language for an Emerging Field

In emerging fields of study, language and constructs are typically fluid and subject to considerable discussion and debate. Implementation research is no exception. Creating "a common lexicon…of implementation…terminology" is important both for the science of implementation and for grounding new researchers in crucial conceptual distinctions (National Cancer Institute 2004). Indeed, the development of theoretical frameworks and implementation models of change is currently hampered by "diverse terminology and inconsistent definition of terms such as diffusion, dissemination, knowledge transfer, uptake or utilization, adoption, and implementation" (Ellis et al. 2003).

Implementation Research Defined

Implementation research is increasingly recognized as an important component of mental health services research and as a critical element in the Institute of Medicine's translation framework, particularly its Roadmap Initiative on Re-Engineering the Clinical Research Enterprise (Rosenberg 2003; Sung et al. 2003). In their plenary address to the 2005 NIMH Mental Health Services Research Conference, "Challenges of Translating Evidence-Based Treatments into Practice Contexts and Service Sectors," Proctor and Landsverk (2005) located implementation research within the second translation step, that is, between treatment development and the integration of efficacious treatments into local systems. This second translation step underscores the need for implementation research that is distinct from efficacy and effectiveness research in outcomes, substance, and method. A number of similar definitions of implementation research are emerging (Eccles and Mittman 2006). For example, Rubenstein and Pugh (2006) propose a definition of implementation research for health services research:

Implementation research consists of scientific investigations that support movement of evidence-based, effective health care approaches (e.g., as embodied in guidelines) from the clinical knowledge base into routine use….Such investigations form the basis for health care implementation science…, a body of knowledge (that can inform)…the systematic uptake of new or underused scientific findings into the usual activities of regional and national health care and community organizations, including individual practice sites (p. S58).

The CDC has defined implementation research as “the systematic study of how a specific set of activities and designated strategies are used to successfully integrate an evidence-based public health intervention within specific settings” (RFA-CD-07-005).

Dissemination Versus Implementation

NIH Program Announcements on Dissemination and Implementation Research in Health distinguish between dissemination, "the targeted distribution of information and intervention materials to a specific public health or clinical practice audience" with "the intent to spread knowledge and the associated evidence-based interventions," and implementation, "the use of strategies to introduce or change evidence-based health interventions within specific settings." The CDC makes similar distinctions. Within this framework, evidence-based practices are first developed and tested through efficacy studies and then refined through effectiveness studies (which may entail adaptation and modification to increase external validity and feasibility). Resultant findings and EBPs are then disseminated, often passively via simple information dissemination strategies, usually with very little uptake. Considerable evidence suggests that active implementation efforts must follow, for creating evidence-based treatments does not ensure their use in practice (U.S. Department of Health and Human Services 2006). In addition to an inventory of evidence-based practices, the field needs carefully designed strategies developed through implementation research. Implementation research has begun with a growing number of observational studies to assess barriers and facilitators, which are now being followed by a very small number of experimental studies to pilot test, evaluate, and refine specific implementation strategies. This research may lead to further refinement and adaptation, yielding implementation "programs" that are often multi-component. These implementation programs are then ready for "spread" to other sites. We would argue (as does The Road Ahead report; U.S. Department of Health and Human Services 2006) that implementation research in mental health care is needed in a variety of settings, including specialty mental health; medical settings, such as primary care, where mental health care is also delivered; and non-specialty settings, such as criminal justice, school systems, and social services, where mental health care delivery is increasingly being imported. In fact, we would also argue that a critical discussion is needed regarding whether implementation research models might differ significantly across these very different sectors or organizational platforms for mental health care delivery.

Diffusion and Translation Research

The CDC defines diffusion research as the study of the factors necessary for successful adoption of evidence-based practices by stakeholders and the targeted population, resulting in widespread (e.g., state or national) use (RFA-CD-07-005). Greenhalgh et al. (2004) further distinguish between diffusion, the passive spread of innovations, and dissemination, which involves "active and planned efforts to persuade target groups to adopt an innovation" (p. 582). Thus implementation is the final step in a series of events characterized under the broadest umbrella of translation research, which includes a wide range of complex processes (diffusion, dissemination, and implementation).

Practice or Treatment Strategies Versus Implementation Strategies

Two technologies are required for evidence-based implementation: practice or treatment technology, and a distinct technology for implementing those treatments in service system settings of care. Implementation is dependent on a supply of treatment strategies. Presently, a "short list" of interventions that have met a threshold of evidence (according to varying criteria) is ready for, or has moved into, implementation; examples include Multisystemic Therapy (MST), Assertive Community Treatment (ACT), supported employment, and chronic care management/collaborative care. Research suggests that features of the practices themselves bear upon "acceptability," "uptake," and "fit" or compatibility with the context for use (Cain and Mittman 2002; Isett et al. 2007). Typically, issues of fidelity, adaptation, and customization arise, leading ultimately to the question, "where are the bounds of flexibility before effectiveness is compromised?"

Implementation strategies are specified activities designed to put into practice an activity or program of known dimensions (Fixsen et al. 2005). In short, they comprise deliberate and purposeful efforts to improve the uptake and sustainability of treatment interventions. Implementation strategies must deal with the contingencies of various service systems or sectors (e.g., specialty mental health, medical care, and non-specialty settings) and practice settings, as well as the human capital challenge of staff training and support and the properties of interventions that make them more or less amenable to implementation. They must be described in sufficient detail that independent observers can detect the presence and strength of the specific implementation activities, as the sketch below illustrates. Successful implementation requires that specified treatments are delivered in ways that ensure their success in the field, that is, feasibly and with fidelity, responsiveness, and sustainability (Glisson and Schoenwald 2005).
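To make the specification requirement concrete, here is a minimal sketch of one way a multi-component implementation strategy might be encoded as discrete, observable activities. The schema, field names, and numbers are hypothetical illustrations, not an established instrument.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical schema: one way to specify an implementation strategy in
# enough detail that independent observers can record the presence and
# strength of each activity. Not an established instrument.

@dataclass
class StrategyActivity:
    name: str                  # e.g., "clinician training workshop"
    target_level: str          # "policy", "organization", "group/team", "individual"
    observable_indicator: str  # what an independent observer would look for
    planned_dose: int          # intended frequency, e.g., sessions per quarter

@dataclass
class ImplementationStrategy:
    label: str
    activities: List[StrategyActivity] = field(default_factory=list)

    def observed_strength(self, observed_doses: Dict[str, int]) -> float:
        """Fraction of the planned activity dose actually observed (0 to 1)."""
        planned = sum(a.planned_dose for a in self.activities)
        observed = sum(min(observed_doses.get(a.name, 0), a.planned_dose)
                       for a in self.activities)
        return observed / planned if planned else 0.0

# Usage sketch with invented numbers:
strategy = ImplementationStrategy("hypothetical EBP rollout", [
    StrategyActivity("clinician training workshop", "group/team",
                     "sign-in sheets and session recordings", 4),
    StrategyActivity("supervisor fidelity review", "organization",
                     "completed fidelity checklists", 12),
])
print(strategy.observed_strength({"clinician training workshop": 3,
                                  "supervisor fidelity review": 10}))  # 0.8125
```

A structured specification of this kind would let observers score each activity's presence and dose against plan, which is the sense in which the "strength" of implementation activities can be detected.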

Currently, the number of identifiable evidence-based treatments clearly outstrips the number of evidence-based implementation strategies. Herschell et al. (2004) reviewed progress, and the lack thereof, in the dissemination of EBPs. Several groups of treatment and service developers have produced similar approaches to taking an effective model to scale, but their methods have been idiosyncratic and as likely to be informed by field experience as by theory and research. Most implementation strategies remain poorly defined, can be distinguished only grossly as "top down" or "bottom up," and typically involve a "package" of strategies. These include a variety of provider decision supports, EBP-related tool kits and algorithms, and practice guidelines; system and organizational interventions from management science; economic, fiscal, and regulatory incentives; multi-level quality improvement strategies (e.g., the Institute for Healthcare Improvement's Breakthrough Series Collaborative, the VA QUERI program); and business strategies (e.g., the Deming/Shewhart Plan-Do-Check-Act cycle). Some implementation strategies are becoming systematic, manualized, and subject to empirical test, including Glisson's ARC model and Chaffin and Aarons' "cascading diffusion" model based on work by Chamberlain et al. (in press). The field can ill afford to continue an idiosyncratic approach to a public health issue as crucial as the research-practice gap. The Road Ahead report calls for research that can develop better understanding of the mechanisms underlying successful implementation of evidence-based interventions in varying service settings and with culturally and ethnically diverse populations.

Implementation Versus Implementation Research

Implementation research comprises the study of processes and strategies that move, or integrate, evidence-based effective treatments into routine use in usual care settings. Understanding these processes is crucial for improving care, but current knowledge rests largely on case studies and anecdotal reports. Systematic, empirical research on implementation is just beginning to emerge, and the field requires substantial methodological development.

Implementation Research: The Need for Conceptual Models

The emerging field of implementation research requires a comprehensive conceptual model to intellectually coalesce the field and guide implementation research. Such a model will require language with clearly defined constructs, as discussed above; a measurement model for those key constructs; and an analytic model hypothesizing links among the measured constructs. Grimshaw (2007) noted at the 2007 OBSSR D & I Conference that we now have more than 30 definitions of dissemination and implementation, and he called for the development of broader theory, and fewer small theories, to guide this emerging field. In our view, no single theory exists because the range of phenomena of interest is broad, requiring different perspectives. This paper seeks to advance the field by proposing a "skeleton" model upon which various theories can be placed to help explain aspects of the broader phenomena.

Stage, Pipeline Models

Our developing implementation research conceptual model draws from three extant frameworks. First is the "stage pipeline" model developed by the National Cancer Institute (2004) and adapted for health services by VA's QUERI program (Rubenstein and Pugh 2006). In the research pipeline, scientists follow a five-phase plan, beginning with hypothesis development and methods development (Phases 1 and 2), continuing into controlled intervention trials (Phase 3, efficacy) and then defined population studies (Phase 4, effectiveness), and ending with demonstration and implementation (Phase 5). Here the process is conceived as a linear progression, with implementation as the "final" stage of intervention development (Proctor and Landsverk 2005). However, Addis (2002) has reviewed the limitations of unidirectional, linear models of dissemination. The NIH Roadmap (nihroadmap.nih.gov) has challenged the research community to re-engineer the clinical research enterprise, namely to move evidence-based treatments "to bedside," into service delivery settings and communities, thereby improving our nation's health. The Roadmap compresses the five stages into two translation steps, the first moving from basic science to intervention development and testing, and the second moving from intervention development to implementation in real-world practice settings. However, "pipeline" models assume an unrealistic unilinear progression from efficacy to broad uptake and remain unspecified regarding the organizational and practice contexts for these stages. Moreover, we would argue that NIH's primary focus, as indicated by resource allocation, remains the first translation step, with little specification of, or emphasis on, the second.

Multi-level Models of Change

Our heuristic model further draws from Shortell's (2004) multi-level model of "change for performance improvement." This framework offers enormous benefit because it specifies multiple levels in the practice context that are likely to be key to change. The model points to hierarchical levels ranging from what Greenhalgh and colleagues would characterize as the outer context (interorganizational), through the inner context (organizational), to the actual practice setting where providers and consumers interact. We posit that the four levels in the Shortell model provide contexts where concepts must be specified and addressed in implementation research, as follows.

The model's top level, the policy context, is addressed in a wide range of disciplines. Implementation research has a long history in policy research, where most studies take a "top-down" (Van Meter and Van Horn 1975) or a "bottom-up" (Linder and Peters 1987) perspective. Legislatures mandate policies, with some form of implementation more or less assured. But the translation of policy into practice through corresponding regulation needs empirical study. Policy implementation research is often retrospective, using focus group or case study methodology (Conrad and Christianson 2004; Cooper and Zmud 1990; Essock et al. 2003; Herschell et al. 2004), which argues for greater use of hypothesis-driven statistical approaches in policy implementation research.

The middle two levels, "organization" and "group/team," are informed by organizational research, with some rigorous study of topics such as business decision support systems (Alavi and Joachimsthaler 1992) and the implementation of environmental technology (Johnston and Linton 2000). Their themes echo those of health services: "champions" and environmental factors were associated with successful implementation (of material requirements planning) in manufacturing (Cooper and Zmud 1990). Also relevant to the organizational level are provider financial incentives to improve patient health outcomes and consumer satisfaction. Conrad and Christianson (2004) offer a well-specified graphic model, with mathematically derived statements, of the interactions between local health care markets and social environments (health plans, provider organizations, and the decisions of organizations, physicians, and patients). Financial and market factors at the organizational level clearly affect evidence-based practice implementation in mental health services (Proctor et al. 2007). Moreover, agency organizational culture may wield the greatest influence on acceptance of empirically supported treatments and on the willingness and capacity of a provider organization to implement such treatments in actual care. Indeed, this emphasis on organizational context reflects the most substantial deviation from the linear, "pipeline" phase models of the literature on the development and spread of interventions. Complexity science (Fraser and Greenhalgh 2001; Litaker et al. 2006) aims to capture the practice landscape, while quality improvement approaches such as the IHI and QUERI models further inform implementation at the organizational level.

Finally, at the bottom level, the key role of individual behavior in implementation must be addressed. Individual providers have been the focus of a sizable body of research on implementing practice guidelines in medical settings and EBPs in mental health settings (Baydar et al. 2003; Blau 1964; Ferlie and Shortell 2001; Gray 1989; Herschell et al. 2004; Woolston 2005). Qualitative studies have documented barriers and stakeholders' attitudes toward EBP (Baydar et al. 2003; Corrigan et al. 2001; Ferlie and Shortell 2001). Essock et al. (2003) have identified stakeholder concerns about EBP that impede implementation. The limitations of the guideline literature prompted Rubenstein and Pugh (2006) to recommend that clinical guideline developers routinely incorporate implementation research findings into new guideline recommendations.

Models of Health Service Use

Models of implementation can further be informed by well-known and well-specified conceptual models of health services that distinguish structural characteristics, clinical care processes, and outcomes, including Aday and Andersen's (1974) comprehensive model of access to care, Pescosolido's "Network-Episode Model" of help-seeking behavior, which has informed research on mental health care utilization (Costello et al. 1998; Pescosolido 1991, 1992), and Donabedian's (1980, 1988) pioneering work on quality of care (McGlynn et al. 1988). While these models do not directly address implementation, they underscore that the active ingredients of a strategy must be specified and linked to multiple types of outcomes, as discussed below.

A Draft Conceptual Model of Implementation Research

Informed by these three frameworks, we propose a heuristic model that posits nested levels, reflects prevailing quality improvement perspectives, and distinguishes but links key implementation processes and outcomes (Fig. 1). An outgrowth of Proctor and Landsverk's plenary address at the 2005 NIMH services research meeting, the model distinguishes two required "core technologies" or strategies: evidence-based intervention strategies and separate strategies for implementing those interventions in usual care. It also provides for the classification of multi-level implementation strategies (Fig. 2). The model accommodates theories of dissemination (Torrey et al. 2001), transportability (Addis 2002; Hohmann and Shear 2002), implementation (Beutler et al. 1995), and diffusion of innovation [posited most prominently by the seminal work of Rogers (1995) as a social process], as well as literatures that have been reviewed extensively and synthesized (Glasgow et al. 2001; Greenhalgh et al. 2004; Proctor 2004). Indeed, some implementation strategies (distinct from empirically based treatments) are emerging to facilitate the transport and implementation of evidence-based medical (Clarke 1999; Garland et al. 2001), substance abuse (Backer et al. 1995; Brown et al. 1995; Brown et al. 2000), and mental health (Blase et al. 2005) treatments. While some heuristics (Ferlie and Shortell 2001; Hohmann and Shear 2002; Schoenwald and Hoagwood 2001) for transportability, implementation, and dissemination have been posited (Brown and Flynn 2002; Chambers et al. 2005), this literature is too often considered from a single disciplinary perspective (e.g., organizational, economic, or psychological), has not "placed" key variables within levels, and has not distinguished types of outcomes. Our draft model illustrates three distinct but interrelated types of outcomes (implementation, service, and client outcomes) that are geared to constructs from the four-level models (Committee on Crossing the Quality Chasm: Adaptation to Mental Health and Addictive Disorders 2006; Institute of Medicine 2001). Furthermore, the model informs methodology, long a weak point of diffusion research (Beutler et al. 1995; McGlynn et al. 1988; Rogers 1995). Systematic studies of implementation require creative multi-level designs to address the challenges of sample size estimation; by definition, larger system levels carry smaller sample sizes, and thus lower potential power, than individual-level analyses. The model also requires the involvement of multiple stakeholders at multiple levels.

Fig. 1 Conceptual model of implementation research

Fig. 2 Levels of change

Yet to be discovered is whether one comprehensive implementation model will emerge, or different models reflecting specific clinical conditions, treatment types (psychosocial vs. pharmacological, or staged interventions), or service delivery settings (specialty mental health vs. primary care vs. non-medical sectors such as child welfare, juvenile justice, geriatric, and homeless services). The relationship between the first column, evidence-based practices, and the second column, implementation strategies, needs to be empirically tested, particularly in light of recent evidence that different evidence-based practices carry distinct implementation challenges (Isett et al. 2007).

Implications for Research and Training

Advancing the field of implementation science has important implications. This paper identifies and briefly discusses three: the methodological challenges of studying implementation processes, the question of who should conduct this important research, and the need to train researchers for this emergent field.

The Challenge of Implementation Research Methods

The National Institute of Mental Health (2004) report on advancing the science of implementation calls for advances in the articulation of constructs relevant to implementation, in converting those constructs into researchable questions, and in the measurement of constructs key to implementation research. Given its inherently multi-level nature, as demonstrated in the prior section, the advancement of implementation research requires attention to a number of formidable design and measurement challenges. While a detailed analysis of the methodological challenges in implementation research is beyond the scope of this paper, two of the major issues, measurement and design, are briefly identified below.

Regarding the measurement challenge, the key processes involved, both empirically supported treatments and implementation processes, must be modeled, measured, and assessed for fidelity. Moreover, researchers must conceptualize and measure intervention outcomes and implementation outcomes as distinct (Fixsen et al. 2005). Improvements in consumer well-being provide the most important criteria for evaluating both treatment and implementation strategies: for the particular individuals who received treatment in the case of treatment research, and for the pool of individuals served by the providing system in the case of implementation research. But implementation research requires outcomes that are conceptually and empirically distinct from those of service and treatment effectiveness. These include the intervention's penetration within a target organization, its acceptability to and adoption by multiple stakeholders, the feasibility of its use, and its sustainability over time within the service system setting. The measurement challenge also requires that measures developed for the conduct of efficacy trials be adapted and tested for feasible and efficient use in ongoing service systems. It is unlikely that the extensive, data-rich batteries of measures developed for efficacy studies, including those developed for efficacy tests of organizational interventions, will be appropriate or feasible in service systems. Thus researchers need to find ways to shorten measurement tools, recalibrate "laboratory" versions, and link adapted measures to the outcomes monitored through service system administrative data.
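As a simple, hedged illustration of how such implementation outcomes differ from clinical outcomes, the sketch below computes penetration and adoption rates from invented service-system counts; the numbers and variable names are hypothetical.

```python
# Illustrative only: invented counts from a hypothetical service system.
# Penetration: share of eligible clients who actually received the EBP.
# Adoption: share of providers who delivered the EBP at least once.

eligible_clients = 400         # clients meeting criteria for the EBP
clients_receiving_ebp = 130    # clients with at least one EBP session
providers_in_system = 25
providers_delivering_ebp = 9

penetration = clients_receiving_ebp / eligible_clients     # 130/400 = 0.325
adoption = providers_delivering_ebp / providers_in_system  # 9/25 = 0.36

print(f"Penetration: {penetration:.1%}")  # Penetration: 32.5%
print(f"Adoption:    {adoption:.1%}")     # Adoption:    36.0%
```

In practice such counts would come from administrative records, which is one reason the paragraph above argues for linking adapted measures to outcomes monitored through service system administrative data.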

In the area of design, studying EBP implementation in even one service system or organization is conceptually and logistically ambitious, given the multiple stakeholders and levels of influence. Yet even complex studies have inherently limited sample sizes, so implementation research is typically beset by a "small n" problem. Moreover, to capture the multiple levels affecting implementation, researchers must employ multi-level designs and corresponding methods of statistical analysis. Other challenges include modeling and analyzing relationships among variables at multiple levels, and costing both interventions and implementation strategies.
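To see why the "small n" problem bites, consider the standard design-effect adjustment for clustered designs, Deff = 1 + (m − 1) × ICC, where m is the average cluster size and ICC the intraclass correlation. The sketch below applies it with hypothetical cluster counts and an assumed ICC.

```python
# Design-effect calculation for cluster-level implementation trials:
# Deff = 1 + (m - 1) * ICC, where m is the average cluster size and
# ICC is the intraclass correlation. All numbers below are hypothetical.

def effective_n(n_clusters: int, cluster_size: int, icc: float) -> float:
    """Effective sample size after adjusting for within-cluster correlation."""
    deff = 1 + (cluster_size - 1) * icc
    return (n_clusters * cluster_size) / deff

# 20 clinics with 30 clients each and modest clustering (ICC = 0.05):
print(round(effective_n(20, 30, 0.05)))  # 600 clients behave like ~245

# The same 600 clients in fewer, larger clusters lose even more power:
print(round(effective_n(10, 60, 0.05)))  # ~152
```

The arithmetic makes the paragraph's point: randomizing or intervening at higher system levels shrinks the effective sample, so implementation studies need designs and analyses built for few, correlated units.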

Thus the maturation of implementation science requires a number of methodological advances. Most early research on implementation, especially that on the diffusion of innovation, employed naturalistic case study approaches. Only recently have prospective, experimental designs been introduced and the methodological issues identified here begun to be systematically addressed.

Who Should Conduct Implementation Research? A Conjoining of Perspectives

Implementation research, whether in health (Rubenstein and Pugh 2006) or mental health, is necessarily multi-disciplinary and requires a convergence of perspectives. To tackle the challenges of implementation, Bammer (2003) calls for collaboration and integration both within and outside the research sphere. Researchers must work together across boundaries, for no one research tradition alone can address the fundamental issue of public health impact (Stetler et al. 2006). Proctor and Landsverk (2005) urged treatment developers and mental health services researchers to partner for purposes of advancing research on implementation, as did Gonzales et al. (2002), who call for truly collaborative, innovative, and interdisciplinary work to overcome implementation and dissemination obstacles. Implementation research requires a partnership of treatment developers, service system researchers, and quality improvement researchers. Yet their perspectives alone will not be sufficient; they need to be joined by experts from fields such as economics, business, and management. Collaboration is needed between treatment developers, who bring expertise in their programs; mental health services researchers, who bring expertise in service settings; and quality improvement researchers, who bring conceptual frameworks and methodological expertise for the multi-level strategies required to change systems, organizations, and providers.

Because implementation research necessarily occurs in the "real world" of community-based settings of care, implementation researchers must also partner with community stakeholders. National policy directives from NIMH, CDC, IOM, and AHRQ (Institute of Medicine 2001; National Advisory Mental Health Council 2001; Trickett and Ryerson Espino 2004; U.S. Department of Health and Human Services 2006) urge researchers to work closely with consumers, practitioners, policy makers, payers, and administrators around the implementation of evidence-based practices. The recent NIMH workgroup report, "The Road Ahead: Research Partnerships to Transform Services," asserts that truly collaborative and sustainable partnerships can significantly improve the public health impact of research (U.S. Department of Health and Human Services 2006). Successful collaboration demands "transactional" models in which all stakeholders contribute equally to, and gain from, the collaboration and in which cultural exchange is encouraged. Such collaborations can move beyond traditional, unidirectional models of "diffusion" of research from universities to practice toward a more reciprocal, interactive "fusing" of science and practice (Beutler et al. 1995; Glasgow et al. 2001; Hohmann and Shear 2002). Implementation research is an inherently collaborative form of inquiry in which researchers, practitioners, and consumers must leverage their different perspectives and competencies to produce new knowledge about a complex process.

Knowledge of partnered research is evolving, stimulated in part by the network development cores of NIMH advanced centers, by research infrastructure support programs, and by reports such as the NIMH Road Ahead report. Yet the partnership literature remains largely anecdotal, case based, or theoretical, with collaboration and partnership broadly defined ideals; "there is more theology than conclusion, more dogma than data" (Trickett et al. 2004, p. 62), and there are few clearly articulated models to build upon. Recent notable advances in the mental health field include Sullivan et al.'s (2005) innovative mental health clinical partnership program within the Veterans Healthcare Administration, designed to enhance the research capacity of clinicians and the clinical collaborative skills of researchers. Evaluation approaches, adapted from public health participatory research (Naylor et al. 2002), are emerging to systematically examine the partnership process and the extent to which equitable participatory goals are achieved. Wells et al. (2004) also advocate for and work to advance community-based participatory research (CBPR) in mental health services research. McKay (in press) models collaboration between researchers and community stakeholders, highlighting collaborative opportunities and sequences across the research process. Borkovec (2004) cogently argues for developing practice research networks, which provide an infrastructure for practice-based research and more efficient integration of research and practice. The community psychology, prevention science, and public health literatures also provide guidance for researcher-agency partnerships and for collaboratives that involve community-based organization representatives, community stakeholders, academic researchers, and service providers (Israel et al. 1998; Trickett and Ryerson Espino 2004). Partnerships between intervention and services researchers, policy makers, administrators, providers, and consumers hold great promise for bridging the oft-cited gap between research and practice. Implementation research requires unique understanding and use of such partnerships.

Training: Building Human Capital for Implementation Research

No single university-based discipline or department is "home" to implementation science. Nor does any current NIMH-funded pre- or post-doctoral research training program (e.g., T32) explicitly focus on preparing new researchers for implementation research. The absence of organized programs of research training on implementation underscores the importance of building training capacity in this field. The Bridging Science and Services report (National Institute of Mental Health 1999) encourages the use of NIMH-funded research centers as training sites, for research centers are information-rich environments that demand continual, intensive learning and high levels of productivity from their members (Ott 1989), attract talented investigators with convergent and complementary interests, and thus provide ideal environments for training in emerging fields such as implementation science (Proctor 1996). The developmental status of implementation science underscores the urgency of advancing the human capital, as well as the intellectual capital, for this important field.

Conclusion

In a now classic series of articles in Psychiatric Services, a blue-ribbon panel of authors reviewed the considerable evidence on the effectiveness and cost savings of several mental health interventions. In stark contrast to the evidence about treatments, these authors could find "no research specifically on methods for implementing" these treatments (Phillips et al. 2001, p. 775), nor any proven implementation strategies (Drake et al. 2001). Unfortunately, 7 years later, implementation science remains embryonic. Members of the international planning group that recently launched the journal Implementation Science concur that systematic, proactive efforts are required to advance the field: to establish innovative training programs, to encourage and support current implementation researchers, and to recruit and prepare a new generation of researchers focused specifically on implementation. Ultimately, implementation science holds promise to reduce the gap between evidence-based practices and their availability in usual care, and thus to contribute to sustainable service improvements for persons with mental disorders. We anticipate that the next decade of mental health services research will require, and be advanced by, the scientific advances associated with implementation science.