Introduction

The last half-century has seen a scientific revolution in how we address and promote children's mental health. Advances in theory, research, and methodology have led to the development, evaluation, and subsequent identification of a number of programs and practices shown to effectively improve mental health and related behavioral outcomes in children and youth (Greenberg et al. 2001). With the emergence of this evidence base has come a concomitant shift in policy and funding to promote the use of evidence-based programs and practices (EBPs). However, we have yet to realize broad public health impact (at the population level) from this movement, because barriers to the widespread adoption, high-quality implementation and fidelity, and sustainability of EBPs remain (Bumbarger et al. 2010).

To begin to address these barriers, the focus of prevention and intervention research has recently turned from developing and testing interventions (referred to as Type 1 translational research) to studying the process of moving EBPs to scale in such a way as to affect population-level outcomes (Type 2 translation; Rohrbach et al. 2006). An important aspect of this process of going to scale with EBPs involves the interactions (and potentially partnership) among researchers, practitioners, and policy makers/funders. Studying and understanding these relationships and interactions is at the very heart of Type 2 translational research. Further, because prevention and intervention for children's mental health is organized (and primarily funded) through state-level systems, state agencies are increasingly seen as critical stakeholders in this process (Bruns et al. 2008). These agencies are in a unique position to influence policy and practice, and to leverage resources to move the needle on children's mental health indicators and outcomes. Still, there is little research to guide states in effectively moving science into practice on a large scale, and the professions of research, policy, and practice continue to operate, to a great extent, as disconnected silos.

In Pennsylvania, an organic partnership between the Prevention Research Center at Penn State University (PRC) and the Pennsylvania Commission on Crime and Delinquency (PCCD, the state criminal and juvenile justice planning agency) has evolved over the last decade, with the goals of promoting the widespread dissemination of EBPs and supporting their effective implementation and sustainability through proactive technical assistance. This collaboration serves as a case study in the potential of such a research–policy partnership, which, in combination with the emerging body of research related to community coalitions and community–university partnerships, can better inform similar collaborations at the state level. Building upon the Interactive Systems Framework (Wandersman et al. 2008) as a conceptual model for understanding such partnerships, we expand upon the role of policy makers/funders in bringing EBPs to scale, while highlighting the critical role that state-level prevention support systems play in linking researchers, practitioners, and policy makers/funders; maximizing opportunities for the study of large-scale EBP implementation under non-research conditions; and effecting continuous quality improvement. Lessons learned through the PRC–PCCD partnership provide considerations for those embarking on similar partnerships and indicate areas where further work is needed.

History and Evolution of the Partnership

In 1998 the Pennsylvania Commission on Crime and Delinquency (PCCD) approached the Penn State Prevention Research Center (PRC) to conduct a process study of the Communities That Care initiative (Hawkins et al. 2002) that had been piloted in a handful of communities over the previous 3 years. The Communities That Care process introduced a public health approach and represented a new model for the way communities and service systems could address youth mental health and behavior problems. As such, PCCD was interested in knowing whether local practitioners and decision makers could embrace such a significant paradigm shift and implement the model with fidelity. The PRC's mixed-methods study reached two important conclusions (Feinberg et al. 2002):

  1. Although the concept of multi-agency collaboration had long been embraced by funders and policy makers, creating and maintaining well-functioning community prevention coalitions was very challenging, and communities had little guidance or expertise on which to draw; and

  2. Communities were able to adequately follow the initial steps of the Communities That Care process, collecting local epidemiological data on risk and protective factors and using the data to establish prevention priorities, but they struggled with the next step of selecting and implementing research-based strategies to address their targeted priorities.

The PRC presented these findings to PCCD, along with recommendations to address these barriers and strengthen the initiative. In response, PCCD (1) awarded a contract to the Center for Juvenile Justice Training and Research to create a technical assistance center to support Communities That Care coalitions throughout Pennsylvania, and (2) established a second state-funded initiative to provide grants for the adoption and implementation of a menu of specific research-based prevention and intervention programs targeting children and youth. Through these two initiatives (Communities That Care and EBP), more than 100 community prevention coalitions have been created over the subsequent decade, and grants have been distributed to fund nearly 200 replications of EBPs throughout the state (Meyer-Chilenski et al. 2007). Recent studies have shown that in communities where this combination of Communities That Care coalitions and EBPs has been adopted, population-level rates of delinquency and substance abuse are significantly lower, and adolescent developmental trajectories show sustained improvement over time (Brown et al. 2010; Feinberg et al. 2010). Further, counties where these EBPs have been adopted have seen lower rates of costly out-of-home placement of delinquent and dependent youth (Bumbarger et al. 2010). However, these successes did not come immediately or without challenges.

Many of the initial sites funded to implement EBPs struggled with training, startup, model fidelity, and sustaining the programs beyond the period of PCCD seed funding. In the early years of the initiative, PCCD provided limited reactive technical assistance when such problems arose, but quickly recognized that supporting community coalitions through training and technical assistance and providing funding for the implementation of EBPs would not impact population-level outcomes unless the barriers of implementation quality and sustainability were addressed in a more structured and proactive way (Rhoades et al., under review). PCCD also recognized that, with the growing movement in support of EBPs, other state agencies were also funding some of these same programs within their separate systems. As a result, in 2008 PCCD partnered with the Pennsylvania Department of Public Welfare (specifically the Office of Mental Health and Substance Abuse Services, and the Office of Children, Youth, and Families) to support the development of the Evidence-based Prevention and Intervention Support Center (EPISCenter; see www.episcenter.psu.edu). The goals of the EPISCenter, which functions as a unit within the PRC, are to promote the greater use and support of evidence-based prevention and intervention throughout Pennsylvania, to provide technical assistance to communities and providers implementing EBPs, and to conduct and disseminate translational research to increase the knowledge base.

Approach to Collaboration (Structure, Funding, and Theory)

The EPISCenter is overseen by a multi-agency steering committee (the Resource Center Steering Committee for Evidence-based Programs and Practices) that includes representation from the primary funders, the Pennsylvania Commission on Crime and Delinquency (PCCD) and the Department of Public Welfare, as well as from the state Departments of Education and Health (including the Bureau of Drug and Alcohol Programs) and the Juvenile Court Judges Commission (a cabinet-level state agency with responsibility for the juvenile court and probation). This multi-agency oversight is important because it represents essentially all of the sources of prevention and intervention funding that impact local communities. The Resource Center Steering Committee also provides oversight to a related initiative, operated through the National Center for Juvenile Justice, aimed at identifying promising juvenile justice programs and practices that have not yet been rigorously evaluated. Funding for both efforts comes jointly from PCCD and the Department of Public Welfare through a combination of state budget and federal pass-through funds. The Resource Center Steering Committee meets quarterly with the EPISCenter and the National Center for Juvenile Justice to establish priorities, review progress, and discuss new data on the implementation and impact of EBPs.

The partnership between the EPISCenter, the state, and local practitioners is guided by a conceptual model that expands upon the Interactive Systems Framework (ISF; Wandersman et al. 2008). The ISF expands on the Institute of Medicine’s Research-to-Practice model (Mrazek and Haggerty 1994) by explicating the stage at which effective interventions are taken to scale and ultimately delivered to consumers—identifying the systems vital to large-scale dissemination, implementation, and sustainability of EBPs (Glasgow et al. 2003; Rohrbach et al. 2006; Wandersman et al. 2008). Specifically, the ISF differentiates the Prevention Synthesis and Translation System, the Prevention Delivery System, and the Prevention Support System (Fig. 1).

Fig. 1 The interactive systems framework (Wandersman et al. 2008)

The Prevention Synthesis and Translation System is responsible for conducting basic prevention science research, synthesizing this knowledge into interventions, and conducting and disseminating translational research (Rohrbach et al. 2006). The Prevention Delivery System represents the practitioners and providers ultimately responsible for delivering prevention and intervention services to consumers (children and families). The Prevention Support System serves as the "bridge" between the Prevention Synthesis and Translation System and the Prevention Delivery System, supporting practitioners and connecting them with researchers to facilitate bi-directional knowledge transfer (Chinman et al. 2008; Flaspohler et al. 2008a; Hunter et al. 2009). In addition, each of these systems operates within an important macro-system of policy and funding. In the adaptation of the ISF that guides our partnership (Fig. 2), we clarify the role of the policy and funding macro-system as a key, active partner in these processes (Bruns et al. 2008). The ISF has provided an important theoretical model to guide our partnership and has helped us conceptualize the systems and interactions that are necessary but in practice are not always present or well-connected (Hallfors et al. 2007; Midgley 2009). In particular, prior to the ISF the Prevention Support System had not been specifically promoted as a key system for connecting science to policy and practice, and such a support system is often completely absent (as was the case in Pennsylvania prior to the establishment of the EPISCenter). The absence of a Prevention Support System can result in EBPs being promoted or mandated but not adequately supported, and in policy and funding decisions occurring without input from prevention researchers and practitioners (Emshoff 2008).

Fig. 2 Placing the EPISCenter conceptual model within the interactive systems framework (Rhoades et al., under review)

Within our expanded ISF model, the points of interaction represented by the bi-directional arrows signify the opportunities for partnership among the stakeholders. When such a Prevention Support System is present and well connected to the other stakeholders, it can create a strong partnership among policy makers/funders, researchers, and practitioners, increasing the capacity of each and addressing the key barriers to improved children's mental health (Livet et al. 2008; Wilkinson et al. 2009).

The Prevention Support System links prevention researchers directly to the funding and policy macro-system, while simultaneously providing opportunities to support EBPs and to study the issues related to taking EBPs to scale under real-world conditions (Bumbarger and Perkins 2008). Beyond strengthening each of the systems individually, the Prevention Support System serves a connecting function essential for establishing the feedback loops that lead to continuous quality improvement (Bickman 2008). Within this partnership context, the EPISCenter facilitates capacity building by conducting and translating research on key prevention topics and then using this research to inform the technical assistance and guidance provided to EBP practitioners and policy makers. These research topics have included barriers to dissemination, implementation, and fidelity of EBPs; the causes and nature of adaptation; the characteristics that predict sustainability; the predictors of community coalitions' effectiveness at supporting the implementation of EBPs; and the overall public health impact of this infrastructure on population-level youth outcomes (Brown et al. 2010; Bumbarger and Perkins 2008; Dariotis et al. 2008; Feinberg et al. 2010; Tibbits et al. 2010).

Collectively, this body of research addresses timely issues for practitioners and policy makers and subsequently informs our view of the capacity necessary to take EBPs to scale (Domitrovich et al. 2008; Greenberg 2004). Perhaps most importantly it represents a partnership that has mutually benefitted all three stakeholder groups: practitioners receive well-informed technical assistance to address key barriers to EBP implementation and sustainability; state agency policy makers and funders get timely feedback to continuously improve their initiatives and are better able to demonstrate the quality and impact of their management of taxpayer funds; and researchers get access to a unique large-scale test bed for translational research and the opportunity to have their research result in timely and meaningful impact on the field.

Our decade-long partnership with Pennsylvania communities and policy makers has been incredibly rewarding, and at times incredibly challenging. Beyond increasing the knowledge base of moving science to practice, this collaboration has left us with a rich body of very practical knowledge in the important art of partnership itself. Although the partnership described here grew from PCCD approaching the PRC with a specific research/evaluation need, it could just as likely have been the reverse, and we believe the lessons in effective partnership are broadly generalizable. Below we have organized these lessons learned into a group of overarching themes that may be helpful as others embark on similar partnerships.

Lessons Learned

Although partnership—be it through coalitions, collaborations, or community-based participatory research—is universally seen as a more effective and efficient way to work toward better children's mental health outcomes, partnership alone is not a panacea, nor is it easy. It is in fact its own emerging science, and one that is very challenging to apply effectively. Substantial science related to community coalitions and community–university partnerships has emerged in recent years, identifying some of the predictors of partnership success and sustainability (Arthur et al. 2010; Brown et al. 2010; Greenberg et al. 2007; Nowell 2009; Rolleri et al. 2008). This emerging knowledge base, coupled with case studies such as the Pennsylvania collaboration presented here, can better inform research–policy partnerships at the state level. In our partnership journey in Pennsylvania over the last decade we have learned a number of valuable lessons.

Applied Science Must be Applied Differently

The science that takes place in the context of a research–policy partnership must necessarily be different from science conducted primarily for the sake of empirical inquiry and knowledge-building. While the latter may take decades and involve careful development and experimental testing of theory, the former must be rapid, relevant, and accessible. The questions addressed cannot come primarily from academic curiosity; they must come from the immediate needs and priorities of practitioners and policymakers. While this may seem obvious, consider for example the policy relevance of research on cost-benefit ratios of various children's mental health interventions versus research on the genetic correlates of childhood mental illness. A quick PsycINFO review of the English-language, child and adolescent literature for "children's mental health AND cost-benefit" produces only five citations, while "genetic AND children's mental health" produces sixty-two. The point here is that although science should inform policy and practice, the research literature does not necessarily lend itself to the immediate needs of practitioners and policy makers.

Likewise, such a research–policy partnership must be conducted as applied research (i.e., it should be conducted with an eye toward drawing firm conclusions and practical recommendations for policy and practice). Unlike conventional research, which often results in more caveats and calls for further study than conclusions, science that takes place within the context of such a policy partnership must end with clear and unequivocal answers and recommended actions. This may mean the researcher is "forced" to advocate a course of action; when the evidence is mixed or seems ambiguous, the researcher must consider the preponderance of that evidence and "choose a side". We have also found it useful when working with practitioners and policy makers to speak of "evidence" as representing a place along a "continuum of confidence" rather than as being present or absent. Rather than focusing on whether or not a particular program or practice is "on the list", we encourage them to consider "how confident can I be that this program or practice will result in the kinds of outcomes I'm concerned with?"

Finally, the research findings and subsequent recommendations must be communicated in media common to and accessible to practitioners and policy makers. The currency of this research is one-page fact sheets with a few bullet points and plenty of white space; manuscripts must be transformed into PowerPoint presentations, 2- to 4-page research briefs and talking points, and 3-minute YouTube videos. In this regard, the emerging use of knowledge visualization and infographics becomes increasingly important as our access to information and data increases exponentially while our ability to effectively manage and make practical use of that data and information struggles to keep up (Zupan et al. 2006).

Recognize What Motivates Behavior Change

Much of the success in the dissemination of EBPs has come through mandates and grant opportunities. When practitioners and communities adopt EBPs primarily because of the extrinsic motivation of funding or to comply with policy requirements, they are less likely to commit to quality implementation and less likely to sustain these efforts after seed funding ends. Consider that policy makers and practitioners in children's mental health operate in systems that have been in place in the United States for over 100 years, while the emerging evidence base informing our knowledge of "what works" is less than half that age. In the absence of empirical evidence, these systems evolved based on faith and good intentions. It is challenging, then, to convince practitioners and policy makers to embrace an empirical yardstick, especially when the tools and resources to apply that yardstick are lacking and when the extrinsic motivators of politics and funding seem to push them toward different goals. Because EBPs are often more complex, costly, and time- and resource-intensive than simpler and less costly alternatives, the intrinsic desire to improve youth outcomes is often in conflict with the realities of agency management, budgets, and practitioner support, and the barriers to quality implementation and sustainability can outweigh the potential benefits. The result is that providers (and policy makers) can simultaneously embrace and ignore evidence-based practice.

To address this inherent conflict, we have found it valuable to foster in practitioners and policy makers the desire to achieve measurable improvement in children's mental health, and the expectation that such measurable improvement can in fact be accomplished. In our experience, the most effective way to create such intrinsic motivation to embrace EBPs has been (1) correcting the imbalance of barriers and rewards; and (2) creating the capacity to demonstrate that the EBPs are in fact resulting in improved outcomes for youth, reduced burden on systems, or other effects that are timely and relevant to the stakeholders involved. Even in this new era of accountability, it is still not the norm for practitioners or state-level stakeholders to be able to demonstrate meaningful impacts from programs and services, using reliable metrics that associate particular outcomes with specific interventions or practices. In this case, helping practitioners (and the policy makers who fund them) to reliably measure and communicate program impact is a win–win partnership for researchers interested in diffusion of innovation.

Correcting the above-mentioned imbalance of barriers and rewards (with the goal of increasing intrinsic motivation) involves purposefully identifying and addressing the challenges of adopting and implementing EBPs with quality and fidelity, and sustaining them long term to affect successive cohorts. This requires (1) a meaningful and stable infrastructure; (2) local epidemiological data that can be used to more efficiently and effectively focus resource allocation; (3) thoughtful selection of programs and practices that target prioritized risk factors, including careful consideration of their evidence of effectiveness as well as local fit and feasibility; (4) adequate organizational readiness for initial startup, including staff skills, resources, and buy-in; (5) adequate initial training not only in the mechanics of delivering the intervention or utilizing the practice, but also in a deep understanding of its logic model and how it ultimately effects behavioral change; (6) the tools, processes, and capacity for ongoing collection, analysis, and feedback of data on both implementation and (proximal and distal) outcomes, including clinical supervision, coaching/mentoring, and technical assistance; and (7) adequate sustainability planning to effectively move successful programs and practices from innovation to common organizational practice.

These seven prerequisites are often inadequately addressed by policymakers and funders, and they subsequently overwhelm providers and practitioners attempting to adopt EBPs. Fortunately, we have found that as prevention researchers connected to both practitioners and policy makers we are well-positioned to help address these specific needs. Across multiple replications of EBPs, many common barriers emerged (i.e., most of these challenges are not unique to a particular community), making it efficient to address them at scale through training and technical assistance or through policy changes. By helping practitioners overcome these barriers, and by assisting them in documenting EBP impacts, researchers can create the intrinsic desire in both practitioners and policy makers to adopt a results-based organizational philosophy (Ganju 2006; Huang et al. 2003). Table 1 provides examples of some of the resources developed and policies enacted to address these barriers.

Table 1 Resources created and policies adopted to address identified needs of EBP implementation

Partnership Requires Balance and Compromise

An important (and progressive) aspect of Pennsylvania's initiative, and of its subsequent success in improving youth outcomes, has been the balanced approach the Pennsylvania Commission on Crime and Delinquency (PCCD) has taken toward promoting both evidence-based practice and "practice-based evidence" (Chagnon et al. 2010; Hage et al. 2007; Massatti et al. 2008; Midgley 2009; Wells et al. 2004). While PCCD's initiative recognizes the value of promoting empirically validated interventions and practices, it also acknowledges the prospect (and potential value) of programs and practices that already have widespread support and uptake but have not undergone rigorous experimental evaluation. Without diminishing the integrity of the science behind EBPs, PCCD is working with the National Center for Juvenile Justice to identify a small number of these "locally grown" innovations that are most likely to stand the test of empirical trial. After a period of self-assessment guided by a set of established criteria (focusing on evidence of efficacy, fit, and feasibility for dissemination), agencies or providers submit their "best practices" to a state panel of researchers and policy makers for independent assessment and consideration. The process itself helps bring science to the field while also facilitating the organic emergence of this "wisdom guided by knowledge". Aside from being a rational acknowledgement of the limitations of the current lists of EBPs (Midgley 2009), this process recognizes the knowledge and expertise of local practitioners and represents an excellent example of the compromise often inherent in good partnership. It stands in contrast to the linear, top-down process of knowledge transfer that often results in a blind promotion of EBP lists and confuses the goal of diffusion of innovation with the goal of improved children's mental health.

Effective Partnerships Pursue Shared Objectives (and Respect Goals That Aren’t Shared)

At the foundation of good partnership is the opportunity and capacity for each partner to help meet the needs and achieve the goals of the other, while simultaneously meeting their own needs and pursuing their own goals (Daniels and Sandler 2008; Macaulay and Nutting 2006; Nowell 2009; Schensul 2009). As in a Venn diagram, the needs and goals of each partner are rarely exactly the same but often have some overlap that represents the opportunity for partnership. An additional but often overlooked aspect of successful partnership, however, is the recognition of, and respect for, the partner's needs and goals that do NOT overlap with our own. For example, as researchers we have the need to publish, to use valid and reliable measures, to examine mediators and moderators beyond main effects, and to comply with Institutional Review Board procedures. Policy makers and funders may have compliance or political needs, such as the need to have all of their funds encumbered by the end of a fiscal year, or to distribute resources evenly across regions or communities (or to support specific communities regardless of their capacity or readiness). Providers have the need to preserve their workforce and to maintain positive relationships with key stakeholders. The greatest challenge to successful partnership is effectively communicating, understanding, and honoring each partner's own hierarchy of needs, and actively building the recognition of those needs into project planning. In our technical assistance with community coalitions, we promote this process through an activity called stakeholder mapping (Ostrom et al. 1995), in which each stakeholder lists their organization's goals and priorities (including sometimes unofficial or unspoken priorities), as well as the organization's strengths and needs. The group then identifies and prioritizes shared goals and priorities, and matches strengths with needs. In the process, the unshared needs and goals are also acknowledged.

Summary and Future Directions

A scientific revolution in the last half-century has created great potential for achieving significant population-level improvement in children’s mental health and the prevention of youth problem behaviors through the use of evidence-based programs and practices. However, to attain such a lofty goal we need to realize sustained, large-scale implementation of these EBPs with sufficient quality and fidelity, and this will require strong partnerships among researchers, policymakers, and practitioners organized at the state level. The state agency–university partnership between PCCD and the PRC that has taken place in Pennsylvania over the last decade serves as a case study for such a sustained and mutually-beneficial enterprise, and provides important lessons for effective collaboration.

Although there has been considerable recent research on the characteristics and predictors of successful collaboration at the community level, the facilitators of and barriers to EBP implementation and sustainability, and the influence of research on policy, the fields of research, practice, and policy in children's mental health remain significantly disconnected. This type of research–policy partnership, particularly at the state level, is an important area for future study. In particular, the extent to which the current knowledge base from community collaboration, community-based participatory research, and community–university partnerships applies to state-level systems is unknown. An emerging body of research on when and how research influences policy will also make important contributions in this area.

Likewise, there are lessons and practices from community-level planning that can be applied to such state-level research–policy partnerships. The PCCD initiative has resulted in nearly 200 replications of a diverse menu of EBPs, but we must continually remind ourselves that dissemination of EBPs is not the goal but a means to an end (namely, improved behavioral and health outcomes for youth). Similarly, from a provider perspective the goal is not simply to get an EBP grant from the state to keep a program operating or keep therapists' salaries covered. When partners with not-perfectly-overlapping needs and goals move down the path of collaborative action, there is a natural tendency to experience "mission drift", either confusing means and ends as described above or simply mistaking action for progress. Just as we teach communities to use logic models and careful program planning to articulate the flow of inputs, outputs, and outcomes and to establish benchmarks for progress, partnerships between researchers and policy makers or practitioners can likewise gain from such careful planning. Especially at this macro level, human and social services could benefit from outcome-focused models such as Getting To Outcomes (Wandersman et al. 2000) or Results-based Accountability (Friedman 1997, 2005), or from private-sector business management models such as Six Sigma (Tennant 2001).
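To make this planning discipline concrete, the sketch below shows one way a partnership might encode a simple logic model with explicit benchmarks, so that "mission drift" surfaces in the data rather than being discovered after the fact. It is a minimal illustration in Python; the field names, benchmarks, and values are hypothetical assumptions for exposition, not elements of the Pennsylvania initiative or of Getting To Outcomes.

```python
from dataclasses import dataclass

@dataclass
class LogicModel:
    """A minimal logic model: inputs support outputs, which drive outcomes.

    All names and numbers here are illustrative assumptions, not data from
    the PCCD/EPISCenter initiative.
    """
    inputs: list[str]             # resources committed (funding, staff, TA)
    outputs: list[str]            # activities delivered (trainings, sessions)
    benchmarks: dict[str, float]  # outcome benchmark name -> target value

    def off_track(self, observed: dict[str, float]) -> list[str]:
        """Return the benchmarks whose observed values fall short of target."""
        return [name for name, target in self.benchmarks.items()
                if observed.get(name, 0.0) < target]

# Hypothetical usage: dissemination counts live on the outputs side (means),
# while the benchmarks encode the actual ends (youth outcomes).
model = LogicModel(
    inputs=["seed funding", "proactive technical assistance"],
    outputs=["sites trained", "program cycles delivered"],
    benchmarks={"participant_completion_rate": 0.80, "fidelity_score": 0.85},
)
print(model.off_track({"participant_completion_rate": 0.72,
                       "fidelity_score": 0.90}))
# -> ['participant_completion_rate']
```

Even at this toy scale, the design choice matters: counting replications belongs with the outputs, while the benchmarks force the partnership to keep asking whether the intended ends are actually being reached.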

Perhaps the most critical need for future research is the development and utilization of sophisticated data management platforms. There are few examples of an institutionalized system for providing coaching and support to implementers based on real-time feedback from implementation monitoring data. Nurse Family Partnership (Olds et al. 2007) and Multisystemic Therapy (Schoenwald et al. 2008) are notable exceptions, though their data systems are primarily designed to support training infrastructure. No such system exists for any universal prevention program or for serving multiple EBPs and stakeholders. Prevention funding frequently carries requirements for reporting program impact in some fashion. However, the outcomes reported are often gross measures whose primary purpose is simply to meet reporting requirements and ensure accountability for the distribution of grant funds, rather than tools for program monitoring and improvement (Bickman 2008; Daniels and Sandler 2008; Johnson et al. 2004). This problem is amplified when an agency's funding efforts support a broad range of EBPs that represent diverse target populations, risk and protective factors, and behavioral outcomes, all of which the funder must somehow aggregate to demonstrate the impact of the initiative as a whole (Elliott and Mihalic 2004; Meyer-Chilenski et al. 2007). This challenge is becoming more common as policy makers promote a menu of EBPs rather than an individual preventive intervention for which a well-defined system of assessment can be tailored (Hallfors et al. 2007; Hawkins et al. 2002). Finally, in typical "accountability" systems, data are generally reported by calendar quarter or annually, an arbitrary and static increment that often has little bearing on the practical operation of the program and ignores the important "stages" of program adoption, implementation, and sustainability identified by Rogers and others (Bickman 2008; Rogers 1995).

A robust body of research is needed to create dynamic data collection and management systems developed with the needs of all three stakeholder groups (researchers, providers, and funders) in mind, and with the capacity to provide real-time feedback to inform resource allocation and continuous quality improvement. In the context of research–policy partnerships there is great potential for such "dashboard" systems to simultaneously improve research, policy, and practice, and to strengthen these partnerships at the state level by mutually supporting the needs of each stakeholder.
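As a thought experiment only, the fragment below sketches what the core of such a "dashboard" feedback loop might look like: implementation-monitoring records arrive on the program's own schedule, and sites whose most recent observation falls below a fidelity threshold are flagged for proactive technical assistance. Every schema field, threshold, and value is a hypothetical assumption for illustration; this does not describe the EPISCenter's systems or those of Nurse Family Partnership or Multisystemic Therapy.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class MonitoringRecord:
    """One implementation-monitoring observation from a provider site.

    Hypothetical schema for illustration only.
    """
    site: str
    program: str      # which EBP the site is replicating
    stage: str        # "adoption", "implementation", or "sustainability"
    fidelity: float   # 0.0-1.0 adherence score from observer ratings
    reported: date

FIDELITY_FLOOR = 0.80  # assumed threshold below which TA outreach is triggered

def sites_needing_ta(records: list[MonitoringRecord]) -> list[str]:
    """Flag sites whose most recent record falls below the fidelity floor.

    Unlike quarterly aggregate reporting, this runs on whatever data have
    arrived, so feedback can follow the stages of adoption, implementation,
    and sustainability rather than the calendar.
    """
    latest: dict[str, MonitoringRecord] = {}
    for rec in sorted(records, key=lambda r: r.reported):
        latest[rec.site] = rec  # later records overwrite earlier ones
    return [site for site, rec in latest.items()
            if rec.fidelity < FIDELITY_FLOOR]

# Hypothetical usage:
records = [
    MonitoringRecord("Site A", "Program X", "implementation", 0.74, date(2010, 3, 1)),
    MonitoringRecord("Site A", "Program X", "implementation", 0.91, date(2010, 4, 1)),
    MonitoringRecord("Site B", "Program Y", "adoption", 0.65, date(2010, 4, 2)),
]
print(sites_needing_ta(records))  # -> ['Site B']
```

Even a sketch this small makes the underlying design principle visible: when the unit of reporting is the site and the stage rather than the fiscal quarter, the same data can simultaneously serve the researcher studying implementation, the provider seeking coaching, and the funder allocating resources.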