Abstract
This article describes a decade-long partnership between the Prevention Research Center at Penn State and the Pennsylvania Commission on Crime and Delinquency. This partnership has evolved into a multi-agency initiative supporting the implementation of nearly 200 replications of evidence-based prevention and intervention programs, and a series of studies indicating a significant and sustained impact on youth outcomes and more efficient utilization of system resources. We describe how the collaboration has developed into a sophisticated prevention support infrastructure, discuss the partnership and policy lessons learned throughout this journey, and identify remaining issues in promoting this type of research–policy partnership.
Introduction
The last half-century has seen a scientific revolution in how we address and promote children’s mental health. Advances in theory, research, and methodology have led to the development, evaluation, and subsequent identification of a number of programs and practices shown to effectively improve mental health and related behavioral outcomes in children and youth (Greenberg et al. 2001). With the emergence of this evidence base has come a concomitant shift in policy and funding to promote the use of evidence-based programs and practices (EBPs). However, we have yet to realize broad public health impact (at the population level) from this movement because the barriers of widespread adoption, high-quality implementation and fidelity, and sustainability of EBPs remain (Bumbarger et al. 2010).
To begin to address these barriers, the focus of prevention and intervention research has recently turned from developing and testing interventions (referred to as Type 1 translational research) to studying the process of moving EBPs to scale in such a way as to affect population-level outcomes (Type 2 translation; Rohrbach et al. 2006). An important aspect of this process of going to scale with EBPs involves the interactions (and potentially partnership) among researchers, practitioners, and policy makers/funders. Studying and understanding these relationships and interactions is at the very heart of Type 2 translational research. Further, because prevention and intervention for children’s mental health is organized (and primarily funded) through state-level systems, state agencies are increasingly seen as critical stakeholders in this process (Bruns et al. 2008). These agencies are in a unique position to influence policy and practice, and to leverage resources to move the needle on children’s mental health indicators and outcomes. Still, there is little research to guide states in effectively moving science into practice on a large scale, and the professions of research, policy, and practice largely continue to operate as disconnected silos.
In Pennsylvania, an organic partnership between the Prevention Research Center at Penn State University (PRC) and the Pennsylvania Commission on Crime and Delinquency (PCCD, the state criminal and juvenile justice planning agency) has evolved over the last decade, with the goals of promoting the widespread dissemination of EBPs and supporting their effective implementation and sustainability through proactive technical assistance. This collaboration serves as a case study in the potential of such a research-policy partnership, which in combination with the emerging body of research related to community coalitions and community–university partnerships can better inform similar collaborations at the state level. Building upon the Interactive Systems Framework (Wandersman et al. 2008) as a conceptual model for understanding such partnerships, we expand upon the role of policy makers/funders in bringing EBPs to scale, while highlighting the critical role that state-level prevention support systems play in linking researchers, practitioners, and policy makers/funders; maximizing opportunities for the study of the large-scale EBP implementation under non-research conditions; and effecting continuous quality improvement. Lessons learned through the PRC–PCCD partnership provide considerations for those embarking on similar partnerships and indicate areas where further work is needed.
History and Evolution of the Partnership
In 1998 the Pennsylvania Commission on Crime and Delinquency (PCCD) approached the Penn State Prevention Research Center (PRC) to conduct a process study of the Communities That Care initiative (Hawkins et al. 2002) that had been piloted in a handful of communities over the previous 3 years. The Communities That Care process introduced a public health approach and represented a new model for the way communities and service systems could address youth mental health and behavior problems. As such PCCD was interested in knowing whether local practitioners and decision makers could embrace such a significant paradigm shift and implement the model with fidelity. The PRC’s mixed-methods study reached two important conclusions (Feinberg et al. 2002):
1. Although the concept of multi-agency collaboration had long been embraced by funders and policy makers, creating and maintaining well-functioning community prevention coalitions was very challenging and communities had little guidance or expertise on which to draw; and
2. Communities were able to adequately follow the initial steps of the Communities That Care process, collecting local epidemiological data on risk and protective factors and using the data to establish prevention priorities, but they struggled with the next step of selecting and implementing research-based strategies to address their targeted priorities.
The PRC presented these findings to PCCD, along with recommendations to address these barriers and strengthen the initiative. In response PCCD (1) awarded a contract to the Center for Juvenile Justice Training and Research to create a technical assistance center to support Communities That Care coalitions throughout Pennsylvania, and (2) established a second state-funded initiative to provide grants for the adoption and implementation of a menu of specific research-based prevention and intervention programs targeting children and youth. Through these two initiatives (Communities That Care and the EBP grant program), more than 100 community prevention coalitions have been created over the subsequent decade, and grants have been distributed to fund nearly 200 replications of EBPs throughout the state (Meyer-Chilenski et al. 2007). Recent studies have shown that in communities where this combination of Communities That Care coalitions and EBPs has been adopted, population-level rates of delinquency and substance abuse are significantly lower, and adolescent developmental trajectories show sustained improvement over time (Brown et al. 2010; Feinberg et al. 2010). Further, counties where these EBPs have been adopted have seen lower rates of costly out-of-home placement of delinquent and dependent youth (Bumbarger et al. 2010). However, these successes did not come immediately or without challenges.
Many of the initial sites funded to implement EBPs struggled with training, startup, model fidelity, and sustaining the programs beyond the period of PCCD seed-funding. In the early years of the initiative, PCCD provided for limited reactive technical assistance when such problems arose, but quickly recognized that supporting community coalitions through training and technical assistance and providing funding for the implementation of EBPs would not impact population-level outcomes unless the barriers of implementation quality and sustainability were addressed in a more structured and proactive way (Rhoades et al., under review). PCCD also recognized that with the growing movement in support of EBPs, other state agencies were also funding some of these same programs within their separate systems. As a result, in 2008 PCCD partnered with the Pennsylvania Department of Public Welfare (specifically the Office of Mental Health and Substance Abuse Services, and Office of Children, Youth, and Families) to support the development of the Evidence-based Prevention and Intervention Support Center (EPISCenter; see www.episcenter.psu.edu). The goals of the EPISCenter, which functions as a unit within the PRC, are to promote the greater use and support of evidence-based prevention and intervention throughout Pennsylvania, provide technical assistance to communities and providers implementing EBPs, and to conduct and disseminate translational research to increase the knowledge base.
Approach to Collaboration (Structure, Funding, and Theory)
The EPISCenter is overseen by a multi-agency steering committee (the Resource Center Steering Committee for Evidence-based Programs and Practices) that includes representation from the primary funders: the Pennsylvania Commission on Crime and Delinquency (PCCD) and the Department of Public Welfare, as well as the state Departments of Education and Health (including the Bureau of Drug and Alcohol Programs), and the Juvenile Court Judges Commission (a cabinet-level state agency with responsibility for the juvenile court and probation). This multi-agency oversight is important because it represents essentially all of the sources of prevention and intervention funding that impact local communities. The Resource Center Steering Committee also provides oversight to a related initiative operated through the National Center for Juvenile Justice, aimed at identifying promising juvenile justice programs and practices that haven’t yet been rigorously evaluated. Funding for both efforts comes jointly from PCCD and the Department of Public Welfare through a combination of state budget and federal pass-through funds. The Resource Center Steering Committee meets quarterly with the EPISCenter and the National Center for Juvenile Justice to establish priorities, review progress, and discuss new data on the implementation and impact of EBPs.
The partnership between the EPISCenter, the state, and local practitioners is guided by a conceptual model that expands upon the Interactive Systems Framework (ISF; Wandersman et al. 2008). The ISF expands on the Institute of Medicine’s Research-to-Practice model (Mrazek and Haggerty 1994) by explicating the stage at which effective interventions are taken to scale and ultimately delivered to consumers—identifying the systems vital to large-scale dissemination, implementation, and sustainability of EBPs (Glasgow et al. 2003; Rohrbach et al. 2006; Wandersman et al. 2008). Specifically, the ISF differentiates the Prevention Synthesis and Translation System, the Prevention Delivery System, and the Prevention Support System (Fig. 1).
The Prevention Synthesis and Translation System is responsible for conducting basic prevention science research, synthesizing this knowledge into interventions, and conducting and disseminating translational research (Rohrbach et al. 2006). The Prevention Delivery System represents the practitioners and providers ultimately responsible for delivering prevention and intervention services to consumers (children and families). The Prevention Support System serves as the “bridge” between the Prevention Synthesis and Translation System and the Prevention Delivery System, providing support to and connecting researchers to practitioners to facilitate bi-directional knowledge transfer (Chinman et al. 2008; Flaspohler et al. 2008a; Hunter et al. 2009). In addition, each of these systems operates within an important macro-system of policy and funding. In the adaptation of the ISF that guides our partnerships (Fig. 2) we clarify the role of the policy and funding macro-system as a key, active partner in these processes (Bruns et al. 2008). The introduction of the ISF has been an important theoretical model to guide our partnership and has helped us conceptualize the systems and interactions that are necessary but in practice are not always present or well-connected (Hallfors et al. 2007; Midgley 2009). In particular, prior to the ISF the Prevention Support System had not been specifically promoted as a key system for connecting science to policy and practice, and such a prevention support system is often completely absent (as was the case in Pennsylvania prior to the establishment of the EPISCenter). The absence of a Prevention Support System can result in EBPs being promoted or mandated but not adequately supported, and policy and funding decisions occurring without input from prevention researchers and practitioners (Emshoff 2008).
Within our expanded ISF model the points of interaction represented by the bi-directional arrows signify the opportunities for partnership between the stakeholders. When such a Prevention Support System is present and well connected to the other stakeholders it can create a strong partnership among policy makers/funders, researchers, and practitioners, increasing the capacity of each and addressing the key barriers to improved children’s mental health (Livet et al. 2008; Wilkinson et al. 2009).
The Prevention Support System links prevention researchers directly to the funding and policy macro-system, while simultaneously providing opportunities to support EBPs and study the issues related to taking EBPs to scale under real-world conditions (Bumbarger and Perkins 2008). Beyond strengthening each of the systems individually, the Prevention Support System serves a connecting function essential for establishing the feedback loops that lead to continuous quality improvement (Bickman 2008). Within this partnership context the EPISCenter facilitates capacity building by conducting and translating research on key prevention topics and then using this research to inform the technical assistance and guidance provided to EBP practitioners and policy-makers. These research topics have included barriers to dissemination, implementation, and fidelity of EBPs; the causes for and nature of adaptation; the characteristics that predict sustainability; the predictors of community coalitions’ effectiveness at supporting the implementation of EBPs; and the overall public health impact of this infrastructure on population level youth outcomes (Brown et al. 2010; Bumbarger and Perkins 2008; Dariotis et al. 2008; Feinberg et al. 2010; Tibbits et al. 2010).
Collectively, this body of research addresses timely issues for practitioners and policy makers and subsequently informs our view of the capacity necessary to take EBPs to scale (Domitrovich et al. 2008; Greenberg 2004). Perhaps most importantly it represents a partnership that has mutually benefitted all three stakeholder groups: practitioners receive well-informed technical assistance to address key barriers to EBP implementation and sustainability; state agency policy makers and funders get timely feedback to continuously improve their initiatives and are better able to demonstrate the quality and impact of their management of taxpayer funds; and researchers get access to a unique large-scale test bed for translational research and the opportunity to have their research result in timely and meaningful impact on the field.
Our decade-long partnership with Pennsylvania communities and policy makers has been incredibly rewarding, and at times incredibly challenging. Beyond increasing the knowledge base of moving science to practice, this collaboration has left us with a rich body of very practical knowledge in the important art of partnership itself. Although the partnership described here grew from PCCD approaching the PRC with a specific research/evaluation need, it could just as likely have been the reverse, and we believe the lessons in effective partnership are broadly generalizable. Below we have organized these lessons learned into a group of overarching themes that may be helpful as others embark on similar partnerships.
Lessons Learned
Although partnership—be it through coalitions, collaborations, or community-based participatory research—is universally seen as a more effective and efficient way to work toward better children’s mental health outcomes, partnership alone is not a panacea nor is it easy. It is in fact its own emerging science, and very challenging to apply effectively. Substantial science related to community coalitions and community–university partnerships has emerged in recent years, identifying some of the predictors of partnership success and sustainability (Arthur et al. 2010; Brown et al. 2010; Greenberg et al. 2007; Nowell 2009; Rolleri et al. 2008). This emerging knowledge base, coupled with case studies such as the Pennsylvania collaboration presented here, can better inform research–policy partnerships at the state level. In our partnership journey in Pennsylvania over the last decade we have learned a number of valuable lessons.
Applied Science Must be Applied Differently
The science that takes place in the context of a research–policy partnership must necessarily differ from science conducted primarily for the sake of empirical inquiry and knowledge-building. While the latter may take decades and involve careful development and experimental testing of theory, the former must be rapid, relevant, and accessible. The questions addressed cannot come primarily from academic curiosities but from the immediate needs and priorities of practitioners and policymakers. While this may seem obvious, consider for example the policy relevance of research on cost-benefit ratios of various children’s mental health interventions versus research on the genetic correlates of childhood mental illness. A quick PsycINFO review of the English-language, child and adolescent literature for “children’s mental health AND cost-benefit” produces only five citations, while “genetic AND children’s mental health” produces sixty-two. The point here is that although science should inform policy and practice, the research literature does not necessarily lend itself to the immediate needs of practitioners and policy-makers.
Likewise, such a research–policy partnership must be conducted as applied research (i.e., it should be conducted with an eye toward drawing firm conclusions and practical recommendations for policy and practice). Unlike conventional research, which often results in more caveats and calls for further study than conclusions, science that takes place within the context of such policy partnership must end with clear and unequivocal answers and recommended actions. This may mean the researcher is “forced” to advocate a course of action, and when the evidence is mixed or seems ambiguous the researcher must consider the preponderance of that evidence and “choose a side”. We have also found it useful when working with practitioners and policy makers to speak of “evidence” as representing a place along a “continuum of confidence” as opposed to being present or absent. Rather than focusing on whether or not a particular program or practice is “on the list” we encourage them to consider “how confident can I be that this program or practice will result in the kinds of outcomes I’m concerned with?”
Finally, the research findings and subsequent recommendations must be communicated in media common to and accessible by practitioners and policy makers. The currency of this research is one-page fact sheets with a few bullet points and plenty of white space; manuscripts must be transformed into PowerPoint presentations, 2- to 4-page research briefs and talking points, and 3-min YouTube videos. In this regard, the emerging use of knowledge visualization and info-graphics becomes increasingly important as our access to information and data exponentially increases and our ability to effectively manage and make practical use of that data and information struggles to keep up (Zupan et al. 2006).
Recognize What Motivates Behavior Change
Much of the success in the dissemination of EBPs has come through mandates and grant opportunities. When practitioners and communities adopt EBPs primarily because of the extrinsic motivation of funding or to comply with policy requirements, they are less likely to commit to quality implementation and less likely to sustain these efforts after seed funding ends. Consider that policy makers and practitioners in children’s mental health operate in systems that have been in place for over 100 years in the United States, and that the emerging evidence-base informing our knowledge of “what works” is less than half that age. In the absence of empirical evidence, these systems evolved based on faith and good intentions. It is challenging then to convince practitioners and policy makers to embrace an empirical yardstick, especially when the tools and resources to apply that yardstick are lacking and when the extrinsic motivators of politics and funding seem to push them toward different goals. Because EBPs are often more complex, costly, and time- and resource-intensive than simpler and less costly alternatives, the intrinsic desire to improve youth outcomes is often in conflict with the realities of agency management, budgets, and practitioner support, because the barriers to quality implementation and sustainability outweigh the potential benefits. The result is that providers (and policy makers) can simultaneously embrace and ignore evidence-based practice.
To address this inherent conflict we have found it valuable to promote within practitioners and policy makers the desire to achieve measurable improvement in children’s mental health, and the expectation that such measurable improvement can in fact be accomplished. In our experience the most effective way to create such intrinsic motivation to embrace EBPs has been by (1) correcting the imbalance of barriers and rewards; and (2) creating the capacity to demonstrate that the EBPs are in fact resulting in improved outcomes for youth, reduced burden on systems, or other effects that are timely and relevant to the stakeholders involved. Even in this new era of accountability, it is still not the norm for practitioners or state-level stakeholders to be able to demonstrate meaningful impacts from programs and services, using reliable metrics that associate particular outcomes with specific interventions or practices. In this case helping practitioners (and the policy makers who fund them) to reliably measure and communicate program impact is a win–win partnership for researchers interested in diffusion of innovation.
Correcting the above-mentioned imbalance of barriers and rewards (with the goal of increasing intrinsic motivation) involves purposefully identifying and addressing the challenges to adopting and implementing EBPs with quality and fidelity, and sustaining them long term to affect successive cohorts. This requires:
1. A meaningful and stable infrastructure;
2. Local epidemiological data that can be used to more efficiently and effectively focus resource allocation;
3. Thoughtful selection of programs and practices that target prioritized risk factors, including careful consideration of their evidence of effectiveness as well as local fit and feasibility;
4. Adequate organizational readiness for initial startup, including staff skills, resources, and buy-in;
5. Adequate initial training not only in the mechanics of delivering the intervention or utilizing the practice, but also in a deep understanding of its logic model and how it ultimately effects behavioral change;
6. The tools, processes, and capacity for ongoing collection, analysis, and feedback of data on both implementation and (proximal and distal) outcomes, including clinical supervision, coaching/mentoring, and technical assistance; and
7. Adequate sustainability planning to effectively move successful programs and practices from innovation to common organizational practice.
These seven prerequisites are often inadequately addressed by policymakers and funders, and subsequently overwhelm providers and practitioners when they attempt to adopt EBPs. Fortunately, we have found that as prevention researchers connected to both practitioners and policy makers we are well-positioned to help address these specific needs. Across multiple replications of EBPs many common barriers emerged (i.e., most of these challenges are not unique to a particular community), making it efficient to address them at scale through training and technical assistance or through policy changes. By helping practitioners overcome these barriers, and assisting them in documenting EBP impacts, researchers can create the intrinsic desire in both practitioners and policy makers to adopt a results-based organizational philosophy (Ganju 2006; Huang et al. 2003). Table 1 provides examples of some of the resources developed and policies enacted to address these barriers.
Partnership Requires Balance and Compromise
An important (and progressive) aspect of Pennsylvania’s initiative and its subsequent success in improving youth outcomes has been the balanced approach the Pennsylvania Commission on Crime and Delinquency (PCCD) has taken toward promoting both evidence-based practice and “practice-based evidence” (Chagnon et al. 2010; Hage et al. 2007; Massatti et al. 2008; Midgley 2009; Wells et al. 2004). While PCCD’s initiative recognizes the value in promoting empirically-validated interventions and practices, it also acknowledges the prospect (and potential value) of programs and practices that already have widespread support and uptake but haven’t undergone any rigorous experimental evaluation. Without diminishing the integrity of the science behind EBPs, PCCD is working with the National Center for Juvenile Justice to identify a small number of these “locally-grown” innovations that are most likely to stand the test of empirical trial. After a period of self-assessment guided by a set of established criteria (focusing on evidence of efficacy, fit, and feasibility for dissemination), agencies or providers submit their “best practices” to a state panel of researchers and policy makers for independent assessment and consideration. The process itself helps bring science to the field while also facilitating the organic emergence of this “wisdom guided by knowledge”. Aside from being a rational acknowledgement of the limitations of the current lists of EBPs (Midgley, 2009), this process recognizes the knowledge and expertise of local practitioners and represents an excellent example of the compromise often inherent in good partnership. It stands in contrast to the linear, top-down process of knowledge transfer that often results in a blind promotion of EBP lists and confuses the goal of diffusion of innovation with the goal of improved children’s mental health.
Effective Partnerships Pursue Shared Objectives (and Respect Goals That Aren’t Shared)
At the foundation of good partnership is the opportunity and capacity for each partner to help meet the needs and achieve the goals of the other, while simultaneously meeting their own needs and pursuing their own goals (Daniels and Sandler 2008; Macaulay and Nutting 2006; Nowell 2009; Schensul 2009). As in a Venn diagram, the needs and goals of each partner are rarely exactly the same but often have some overlap that represents the opportunity for partnership. An additional but often overlooked aspect of successful partnership, however, is the recognition of, and respect for, the partner’s needs and goals that do NOT overlap with our own. For example, as researchers we have the need to publish, to use valid and reliable measures, to examine mediators and moderators beyond main effects, and to comply with Institutional Review Board procedures. Policy makers and funders may have compliance or political needs, such as the need to have all of their funds encumbered by the end of a fiscal year, or to distribute resources evenly across regions or communities (or to support specific communities regardless of their capacity or readiness). Providers have the need to preserve their workforce and to maintain positive relationships with key stakeholders. The greatest challenge to successful partnership is effectively communicating, understanding, and honoring each partner’s own hierarchy of needs and actively building the recognition of those needs into project planning. In our technical assistance with community coalitions, we promote this process through an activity called stakeholder mapping (Ostrom et al. 1995), where each stakeholder lists their organization’s goals and priorities (including sometimes unofficial or unspoken priorities), as well as its strengths and needs. The group then identifies and prioritizes shared goals and priorities, and matches strengths with needs. In the process, the unshared needs and goals are also acknowledged.
Summary and Future Directions
A scientific revolution in the last half-century has created great potential for achieving significant population-level improvement in children’s mental health and the prevention of youth problem behaviors through the use of evidence-based programs and practices. However, to attain such a lofty goal we need to realize sustained, large-scale implementation of these EBPs with sufficient quality and fidelity, and this will require strong partnerships among researchers, policymakers, and practitioners organized at the state level. The state agency–university partnership between PCCD and the PRC that has taken place in Pennsylvania over the last decade serves as a case study for such a sustained and mutually-beneficial enterprise, and provides important lessons for effective collaboration.
Although there has been considerable research recently on the characteristics and predictors of successful collaboration at the community level, the facilitators and barriers of EBP implementation and sustainability, and the influence of research on policy, the fields of research, practice, and policy in children’s mental health remain significantly disconnected. This type of research–policy partnership, particularly at the state level, is an important area for future study. In particular, the potential relevance of the current knowledge base from community collaboration, community-based participatory research, and community–university partnerships to state-level systems is unknown. An emerging body of research on when and how research influences policy will also make important contributions in this area.
Likewise, there are lessons and practices within community-level planning that can be applied to such state-level research–policy partnerships. The PCCD initiative has resulted in nearly 200 replications of a diverse menu of EBPs, but we must continually remind ourselves that dissemination of EBPs is not the goal, but a means to an end (i.e., improved behavioral and health outcomes for youth). Similarly, from a provider perspective the goal is not simply to get an EBP grant from the state to keep a program operating or keep therapists’ salaries covered. When partners with not-perfectly-overlapping needs and goals move down the path of collaborative action there is a natural tendency to experience “mission drift”, either confusing means and ends as described above or simply mistaking action for progress. Just as we teach communities to use logic models and careful program planning to articulate the flow of inputs, outputs, and outcomes and to establish benchmarks for progress, partnerships between researchers and policy makers or practitioners can likewise gain from such careful planning. Especially at this macro-level, human and social services could benefit from outcome-focused models such as Getting To Outcomes (Wandersman et al. 2000) or Results-based Accountability (Friedman 1997, 2005), or private-sector business management models such as Six Sigma (Tennant 2001).
Perhaps the most critical need for future research is the development and utilization of sophisticated data management platforms. There are few examples of an institutionalized system for providing coaching and support to implementers based on real-time feedback from implementation monitoring data. Nurse Family Partnership (Olds et al. 2007) and Multisystemic Therapy (Schoenwald et al. 2008) are notable exceptions, though their data systems are primarily designed to support a training infrastructure. No such system exists for universal prevention programs or for serving multiple EBPs and stakeholders. Prevention funding frequently carries requirements for reporting program impact in some fashion. However, the outcomes reported are often gross measures whose primary purpose is simply to meet reporting requirements and ensure accountability for the distribution of grant funds, rather than to serve as tools for program monitoring and improvement (Bickman 2008; Daniels and Sandler 2008; Johnson et al. 2004). This problem is amplified when an agency’s funding supports a broad range of EBPs representing diverse target populations, risk and protective factors, and behavioral outcomes, all of which the funder must somehow aggregate to demonstrate the impact of the initiative as a whole (Elliott and Mihalic 2004; Meyer-Chilenski et al. 2007). This challenge is becoming more common as policy makers promote a menu of EBPs rather than an individual preventive intervention for which a well-defined system of assessment can be tailored (Hallfors et al. 2007; Hawkins et al. 2002). Finally, in typical “accountability” systems, data are generally reported by calendar quarter or annually, an arbitrary and static increment that often has little bearing on the practical operation of the program and ignores the important “stages” of program adoption, implementation, and sustainability identified by Rogers and others (Bickman 2008; Rogers 1995).
A robust body of research is needed to inform the creation of dynamic data collection and management systems that are designed with the needs of all three stakeholder groups (researchers, providers, and funders) in mind and that have the capacity to provide real-time feedback to inform resource allocation and continuous quality improvement. In the context of research–policy partnerships, such “dashboard” systems have great potential to simultaneously improve research, policy, and practice, and to strengthen these partnerships at the state level by mutually supporting the needs of each stakeholder.
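To make concrete what stage-aware, real-time monitoring might look like in contrast to static quarterly reporting, the following is a minimal illustrative sketch only; it is not part of the initiative described in this article, and all site names, stages, fidelity data, and thresholds are hypothetical. It groups implementation sites by their stage of diffusion and flags any site whose recent fidelity reports fall below a benchmark, the kind of signal a funder or technical-assistance provider could act on between reporting cycles:

```python
from dataclasses import dataclass, field
from statistics import mean

# Hypothetical stages of program diffusion (after Rogers 1995).
STAGES = ("adoption", "implementation", "sustainability")

@dataclass
class ProgramSite:
    """One EBP replication site reporting fidelity scores over time."""
    name: str
    stage: str                                            # one of STAGES
    fidelity_scores: list = field(default_factory=list)   # 0.0-1.0, most recent last

def dashboard_summary(sites, window=3, threshold=0.8):
    """Group sites by diffusion stage and flag any whose mean fidelity
    over the last `window` reports falls below `threshold`."""
    summary = {stage: {"sites": 0, "flagged": []} for stage in STAGES}
    for site in sites:
        entry = summary[site.stage]
        entry["sites"] += 1
        recent = site.fidelity_scores[-window:]
        if recent and mean(recent) < threshold:
            entry["flagged"].append(site.name)
    return summary

if __name__ == "__main__":
    # Entirely fabricated example data for illustration.
    sites = [
        ProgramSite("Site A", "implementation", [0.90, 0.85, 0.88]),
        ProgramSite("Site B", "implementation", [0.70, 0.65, 0.60]),
        ProgramSite("Site C", "sustainability", [0.92, 0.90, 0.91]),
    ]
    for stage, entry in dashboard_summary(sites).items():
        print(stage, entry)
```

The design point is that flagging is driven by each site’s own recent trajectory and diffusion stage rather than by an arbitrary calendar increment, so support can be targeted while a drop in fidelity is still correctable.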
References
Arthur, M. W., Hawkins, J. D., Brown, E. C., Briney, J. S., Oesterle, S., & Abbott, R. D. (2010). Implementation of the Communities That Care prevention system by coalitions in the Community Youth Development Study. Journal of Community Psychology, 38(2), 245–258. doi:10.1002/jcop.20362.
Bickman, L. (2008). A measurement feedback system (MFS) is necessary to improve mental health outcomes. Journal of the American Academy of Child and Adolescent Psychiatry, 47(10), 1114–1119.
Brown, L. D., Feinberg, M. E., & Greenberg, M. T. (2010). Determinants of community coalition ability to support evidence-based programs. Prevention Science, 11(3), 287–297.
Bruns, E. J., Hoagwood, K. E., Rivard, J. C., Wotring, J., Marsenich, L., & Carter, B. (2008). State implementation of evidence-based practice for youths, part II: Recommendations for research and policy. Journal of the American Academy of Child and Adolescent Psychiatry, 47(5), 499–504.
Bumbarger, B. K., Moore, J., & Rhoades, B. (2010). Impact of evidence-based interventions on delinquency placement rates. Poster presentation at 2011 Society for Prevention Research annual meeting. Washington, DC.
Bumbarger, B. K., & Perkins, D. F. (2008). After randomised trials: Issues related to dissemination of evidence-based interventions. Journal of Children’s Services, 3(2), 53–61.
Bumbarger, B. K., Perkins, D. F., & Greenberg, M. (2010). Taking effective prevention to scale. In Handbook of youth prevention science (pp. 433–444). New York, NY: Routledge.
Chagnon, F., Pouliot, L., Malo, C., Gervais, M., & Pigeon, M. (2010). Comparison of determinants of research knowledge utilization by practitioners and administrators in the field of child and family social services. Implementation Science, 5(1), 41. doi:10.1186/1748-5908-5-41.
Chinman, M., Hunter, S. B., Ebener, P., Paddock, S. M., Stillman, L., Imm, P., et al. (2008). The getting to outcomes demonstration and evaluation: An illustration of the prevention support system. American Journal of Community Psychology, 41(3–4), 206–224.
Daniels, V., & Sandler, I. (2008). Use of quality management methods in the transition from efficacious prevention programs to effective prevention services. American Journal of Community Psychology, 41(3–4), 250–261.
Dariotis, J. K., Bumbarger, B. K., Duncan, L. G., & Greenberg, M. T. (2008). How do implementation efforts relate to program adherence? Examining the role of organizational, implementer, and program factors. Journal of Community Psychology, 36(6), 744–760.
Domitrovich, C. E., Bradshaw, C. P., Poduska, J., Hoagwood, K., Buckley, J. A., Olin, S. S., et al. (2008). Maximizing the implementation quality of evidence-based preventive interventions in schools: A conceptual framework. Advances in School Mental Health Promotion, 1(3), 6–28.
Elliott, D. S., & Mihalic, S. (2004). Issues in disseminating and replicating effective prevention programs. Prevention Science, 5(1), 47–52.
Emshoff, J. G. (2008). Researchers, practitioners, and funders: Using the framework to get us on the same page. American Journal of Community Psychology, 41(3–4), 393–403.
Feinberg, M. E., Greenberg, M. T., Osgood, D. W., Anderson, A., & Babinski, L. (2002). The effects of training community leaders in prevention science: Communities That Care in Pennsylvania. Evaluation and Program Planning, 25(3), 245–259.
Feinberg, M. E., Jones, D., Greenberg, M. T., Osgood, D. W., & Bontempo, D. (2010). Effects of the Communities That Care model in Pennsylvania on change in adolescent risk and problem behaviors. Prevention Science, 11(2), 163–171.
Flaspohler, P., Anderson-Butcher, D., & Wandersman, A. (2008a). Supporting implementation of expanded school mental health services: Application of the interactive systems framework in Ohio. Advances in School Mental Health Promotion, Training and Practice Research and Policy, 1(3), 38–48.
Flaspohler, P., Duffy, J., Wandersman, A., Stillman, L., & Maras, M. A. (2008b). Unpacking prevention capacity: An intersection of research-to-practice models and community-centered models. American Journal of Community Psychology, 41(3–4), 182–196.
Friedman, M. (1997). A guide to developing and using performance measures in results-based budgeting. Washington, DC: The Finance Project.
Friedman, M. (2005). Trying hard is not good enough: How to produce measurable improvements for customers and communities. Bloomington, IN: Trafford Publishing.
Ganju, V. (2006). The need for an evidence-based culture: Lessons learned from evidence-based practices implementation initiatives. National Association of State Mental Health Program Directors. Retrieved from http://www.nri-inc.org/reports_pubs.
Glasgow, R. E., Lichtenstein, E., & Marcus, A. C. (2003). Why don’t we see more translation of health promotion research to practice? Rethinking the efficacy-to-effectiveness transition. American Journal of Public Health, 93(8), 1261–1267.
Greenberg, M. T. (2004). Current and future challenges in school-based prevention: The researcher perspective. Prevention Science, 5(1), 5–13.
Greenberg, M. T., Domitrovich, C., & Bumbarger, B. K. (2001). The prevention of mental disorders in school-aged children: Current state of the field. Prevention & Treatment, 4, Article 1.
Greenberg, M. T., Feinberg, M. E., Meyer-Chilenski, S., Spoth, R. L., & Redmond, C. (2007). Community and team member factors that influence the early phase functioning of community prevention teams: The prosper project. The Journal of Primary Prevention, 28(6), 485–504.
Hage, S. M., Romano, J. L., Conyne, R. K., Kenny, M., Matthews, C., Schwartz, J. P., et al. (2007). Best practice guidelines on prevention practice, research, training, and social advocacy for psychologists. Counseling Psychologist, 35(4), 493–566.
Hallfors, D. D., Pankratz, M., & Hartman, S. (2007). Does federal policy support the use of scientific evidence in school-based prevention programs? Prevention Science, 8(1), 75–81.
Hawkins, J. D., Catalano, R. F., & Arthur, M. W. (2002). Promoting science-based prevention in communities. Addictive Behaviors. Special Issue: Integration substance abuse treatment and prevention in the community, 27(6), 951–976.
Huang, L. N., Hepburn, K. S., & Espiritu, R. C. (2003). To be or not to be…evidence-based? National Technical Assistance Center for Children’s Mental Health. Retrieved from www.ntaccmh.org.
Hunter, S. B., Paddock, S. M., Ebener, P., Burkhart, A. K., & Chinman, M. (2009). Promoting evidence-based practices: The adoption of a prevention support system in community settings. Journal of Community Psychology, 37(5), 579–593.
Johnson, K., Hays, C., Center, H., & Daley, C. (2004). Building capacity and sustainable prevention innovations: A sustainability planning model. Evaluation and Program Planning, 27(2), 135–149.
Livet, M., Courser, M., & Wandersman, A. (2008). The prevention delivery system: Organizational context and use of comprehensive programming frameworks. American Journal of Community Psychology, 41(3–4), 361–378.
Macaulay, A. C., & Nutting, P. A. (2006). Moving the frontiers forward: Incorporating community-based participatory research into practice-based research networks. Annals of Family Medicine, 4(1), 4–7.
Massatti, R. R., Sweeney, H. A., Panzano, P. C., & Roth, D. (2008). The de-adoption of innovative mental health practices (IMHP): Why organizations choose not to sustain an IMHP. Administration and Policy in Mental Health and Mental Health Services Research. Special Issue: Improving mental health services, 35(1–2), 50–65.
Meyer-Chilenski, S., Bumbarger, B., Kyler, S., & Greenberg, M. T. (2007). Reducing youth violence and delinquency in Pennsylvania: PCCD’s Research-based Programs Initiative. University Park, PA: The Prevention Research Center, The Pennsylvania State University.
Midgley, N. (2009). Editorial: Improvers, adapters and rejecters—The link between ‘evidence-based practice’ and ‘evidence-based practitioners’. Prevention Action, 14(3), 323–327.
Mrazek, P. J., & Haggerty, R. J. (1994). Reducing risks for mental disorders: Frontiers for preventive intervention research. Washington, DC: National Academies Press.
Nowell, B. (2009). Profiling capacity for coordination and systems change: the relative contribution of stakeholder relationships in interorganizational collaboratives. American Journal of Community Psychology, 44(3), 196–212.
Olds, D. L., Kitzman, H., Hanks, C., Cole, R., Anson, E., & Sidora-Arcoleo, K. (2007). Effects of nurse home visiting on maternal and child functioning: Age-9 follow-up of a randomized trial. Pediatrics, 120(4), 832–845.
Ostrom, C. W., Lerner, R. M., & Freel, M. A. (1995). Building the capacity of youth and families through university–community collaborations: The Development-In-Context Evaluation (DICE) Model. Journal of Adolescent Research, 10(4), 427–448.
Rhoades, B., Bumbarger, B. K., & Moore, J. (under review). The role of a state-level prevention support system in promoting high-quality implementation and sustainability of evidence-based programs. American Journal of Community Psychology.
Rogers, E. (1995). Diffusion of innovations (4th ed.). New York: The Free Press.
Rohrbach, L. A., Grana, R., Sussman, S., & Valente, T. W. (2006). Type II translation: Transporting prevention interventions from research to real-world settings. Evaluation & the Health Professions, 29(3), 302–333.
Rolleri, L. A., Wilson, M. M., Paluzzi, P. A., & Sedivy, V. J. (2008). Building capacity of state adolescent pregnancy prevention coalitions to implement science-based approaches. American Journal of Community Psychology, 41(3–4), 225–234.
Schensul, J. J. (2009). Community, culture and sustainability in multilevel dynamic systems intervention science. American Journal of Community Psychology, 43(3–4), 241–256.
Schoenwald, S. K., Sheidow, A. J., & Letourneau, E. J. (2008). Toward effective quality assurance in evidence-based practice: Links between expert consultation, therapist fidelity, and child outcomes. Journal of Clinical Child and Adolescent Psychology, 33, 94–104.
Tennant, G. (2001). SIX SIGMA: SPC and TQM in manufacturing and services. Aldershot, UK: Gower Publishing, Ltd.
Tibbits, M. K., Bumbarger, B. K., Kyler, S. J., & Perkins, D. F. (2010). Sustaining evidence-based interventions under real-world conditions: Results from a large-scale diffusion project. Prevention Science, 11(3), 252–262.
Wandersman, A., Duffy, J., Flaspohler, P., Noonan, R., Lubell, K., Stillman, L., et al. (2008). Bridging the gap between prevention research and practice: The interactive systems framework for dissemination and implementation. American Journal of Community Psychology, 41(3–4), 171–181.
Wandersman, A., Imm, P., Chinman, M., & Kaftarian, S. (2000). Getting to outcomes: A results-based approach to accountability. Evaluation and Program Planning, 23(3), 389–395.
Wells, K., Miranda, J., Bruce, M. L., Alegria, M., & Wallerstein, N. (2004). Bridging community intervention and mental health services research. The American Journal of Psychiatry, 161(6), 955–963.
Wilkinson, A., Papaioannou, D., Keent, C., & Booth, A. (2009). The role of the information specialist in supporting knowledge transfer: A public health information case study. Health Information and Libraries Journal, 26, 118–125.
Zupan, B., Holmes, J. H., & Bellazzi, R. (2006). Knowledge-based data analysis and interpretation. Artificial Intelligence in Medicine, 37(3), 1163–1165.
Bumbarger, B.K., Campbell, E.M. A State Agency–University Partnership for Translational Research and the Dissemination of Evidence-Based Prevention and Intervention. Adm Policy Ment Health 39, 268–277 (2012). https://doi.org/10.1007/s10488-011-0372-x