In the movie Field of Dreams, Kevin Costner’s lead character is inspired by a disembodied voice that offers the following guidance about a baseball field: “Build it and they will come.” He did—and they did.

It’s clear that the mysterious advisor had little experience with social technologies in general and prevention programs in particular. Those of us who have been involved in the creation of such programs know that the domains of builders and users are often worlds apart. As such, we might whisper the following advice in the ears of those engaged in developing prevention programs.

Build it—and they will never know about it.

Build it—and they will hear about it but not understand what it is.

Build it—and they will not feel invited.

Build it—and they will want to come, but not know how to get there.

Build it—and they won’t think the seats will fit if they do come.

Build it—and they will think they already have one.

Build it—and they will find it irrelevant to their needs or users.

Build it—and they will decide they should build their own.

Build it—and they won’t be able to afford to come.

Build it—and they won’t be able to read the map to get there.

Build it—and they will assign it to a committee to consider (and then forget about) it.

Build it—and they will come and rebuild it into something unrecognizable.

Build it—and they will come and love it! And ask you and your one assistant to build ten more just like it in surrounding communities. Now.

In the world of prevention, the good news is that we are building more effective programs and that there is growing demand for them. Policy makers and administrators are encouraging (or insisting upon) the use of evidence-based programs and practices (EBPs). This trend is the result of a confluence of influences and dynamics. Funders have become more concerned about efficient use of resources, especially in areas that have experienced funding cuts. Practitioners and administrators are confused and sometimes frustrated by the broad array of program choices and need a guide for making decisions. Consumers are expecting the best possible services. The issue of accountability cuts across all of these stakeholders. Concurrently, the prevention research field has matured over the past 20+ years to the point where there exists an inventory of programs that work. As awareness of this research base diffuses through the worlds of practice, policy, and public awareness, stakeholders want to take advantage of this accrual of knowledge.

However, there are still barriers to the efficient transfer of technology from the domain of knowledge creation (research) to the domain of knowledge utilization (practice) as illustrated by the multiple ineffective “build it” outcomes stated above. The work of the research community is often uninformed regarding community needs and demand. There is confusion about how to best package and disseminate this knowledge. Users are often unable to effectively implement what has been offered to them. All of these barriers are exacerbated by differences in language, cultures, roles, and responsibilities within the research and practice communities. While the literature about dissemination, adoption and implementation continues to grow, there are few unifying concepts or structures to help guide both research and action.

Special Issue Focus—The Framework

Into this context steps the Interactive Systems Framework for Dissemination and Implementation (ISF), created through the collaborative efforts of the Centers for Disease Control and Prevention (CDC) and a research team led by researchers at the University of South Carolina and Miami University (Wandersman et al. 2008). The Framework is a heuristic for understanding and focusing discussion on the relationships among researchers, practitioners, and those who seek to facilitate their interaction. It creates a common language and understanding of the interrelationships among these stakeholders. As such, it connects those who develop knowledge (the research community) with those who deliver prevention services. The intermediary part of the Framework, known as the Prevention Support System, consists of those resources used to connect the worlds of research and practice.

The framework gives us a place to organize and situate our knowledge base regarding prevention research and practice and identifies our gaps in knowledge. From the action perspective, it helps identify necessary activities within the subsystems that must occur to maximize the efficient and effective use of prevention resources. From the research perspective, it helps identify key questions for further investigation. The framework can also be used to identify the challenges and barriers to the successful bridging of the science-practice gap.

The framework, the issues it raises, and examples of its use are the focus of this special issue. The articles presented represent a variety of theoretical, practical and empirical issues related to this framework and its subsystems (prevention synthesis and translation, prevention support, and prevention delivery). They each contribute to our understanding of the mechanisms by which research can be translated to action, how EBPs can be effectively adopted and implemented, and how all of this can occur with consideration of the needs and values of the communities in which we work.

CDC is to be congratulated for taking a lead role at the federal level with respect to these issues and for matching their vision with their resources. The architects of the framework and the authors of these related papers deserve equal praise for their efforts in increasing our capacity to understand and utilize this information.

Capacity of the Framework’s Subsystems

It’s one thing to describe these systems, dynamics, and the imperative to implement science-based programs. It’s quite another to effectively disseminate and implement EBPs. Just because researchers, funders, theoreticians, and policymakers have decided that EBPs should be employed wherever possible does not mean our local delivery systems have the capacity and commitment necessary to see these processes through. Nor does our excitement over their use ensure that researchers are developing EBPs that can be realistically disseminated and implemented. The Prevention Support System (PSS) is the theoretical component of the ISF designed to increase such capacities, but to what extent does such support exist?

In a sense, the promotion of evidence-based programs can be considered a “program” and all of the issues associated with adoption, adaptation, and fidelity of implementation can be applied to our attempts to facilitate the use of EBP. In order to do so, we must increase the capacity of each of the systems within the ISF. As an example of the barriers to effective use of EBPs, we know that there is often resistance to the adoption and implementation of programs that seem inconsistent with local values and culture, regardless of their scientific backing. Undoubtedly, there is a lack of consensus at the local level about the value of science as it informs program choice. Depending on one’s role or point of view, this disconnection between science and practice reflects a deficit of capacity within the practice community, the research community (who may fail to consider community realities while designing and testing programs) or the prevention support system.

The extent to which EBPs become commonly and effectively used is dependent upon the capacity of the systems described in the Framework. Flaspohler et al. (2008) describe a taxonomy of the construct of capacity, with consideration of both ecological level (e.g., individual, organizational) and type (general, or specific to a particular innovation). Capacity also has different meanings when considered in two different contexts: the research-to-practice model, in which innovations are disseminated into communities (usually a top–down approach), and the community-centered model, in which innovation is locally developed and incubated. As attribution theory would predict, each entity presumes that the failure to implement effective programs is a shortcoming of the other. Thus, the top–down approach concludes that there is a lack of capacity among practitioners, while communities often find the programs handed to them to be inadequate for their specific needs and context. The research-to-practice model is currently the dominant practice. Those working from this model operate from the assumption that the gap between science and practice can be closed by training users (usually individuals) to implement a specific, well-defined intervention. Community-level and general capacities tend to receive inadequate attention. For instance, too little attention has been given to the general capacity of a community to identify needs, set goals and objectives, assess potential strategies for reaching these goals and objectives, create and implement action plans, and evaluate their effectiveness.

The divergent perspectives represented by the top–down (individually focused, innovation-specific) model and the community-centered (presumably more generally focused) model offer a challenge to the overall capacity of the Framework. These two different models require different kinds of interactions. With respect to each model, we must ask: How do the subsystems interact? How do the cultures, expectations, assumptions, goals, and experiences of the players within the worlds of research, practice, and the intervening support system differ?

While we have put substantial resources into the functions of each of these systems, we have done little to understand or improve their interactions. Until we give such interfaces adequate attention, we can expect the subsystems of the Framework to operate awkwardly, in fits and starts, with energy expended on mismatched components and resulting tensions that dilute the resources available for our vision and mission of prevention.

In order to facilitate smoother subsystem coordination, researchers and those who fund them should consider the capacity of the delivery system when designing and testing preventive interventions. Interventions that are too expensive, too complex, or too inconsistent with community values to ever achieve widespread adoption can be considered to lack ecological validity. As such, expending precious resources developing and testing them is wasteful and might even be considered an unethical diversion of prevention resources.

Examples of Prevention Support Systems

What can be done to increase the capacity of the delivery systems and to better link the delivery system to the research system? This is the charge given to the Prevention Support System. While some examples of the Prevention Support System are place-based sites such as centers that serve specific constituencies with specific resources (e.g., staff and consultants), others are transportable technologies such as the Getting To Outcomes (GTO) intervention described by Chinman et al. (2008). GTO is a multi-step process that provides extensive resources to guide delivery system agents through the processes of needs assessment, program identification, planning, implementation, evaluation, and sustainability (to over-simplify). These researchers found that GTO increased individual capacity and program capacity within two community-based substance abuse prevention coalitions.

Although GTO is a process, not a program, it is important to submit this and similar processes to the same examination we give to programs. What are the core components of GTO? What are the relative contributions of the manual, the training and the technical assistance provided? What will happen when the program (GTO) is implemented by others not associated with its development? If a delivery system has been trained and assisted with the GTO process, can they (and do they) continue to use the system after the assistance is completed? That is, is it sustainable?

Rolleri et al. (2008) describe their work in creating the Adolescent Reproductive Health Prevention (ARP) Support System. This system was designed to bridge those engaged in conducting and synthesizing research on the effectiveness of adolescent pregnancy preventive interventions with state coalitions charged with facilitating the delivery of these interventions. While GTO is based on a ten-step process and the ARP system is based on seven steps, both are designed to increase the capacity of service delivery systems and are founded on the processes of assessing needs and strengths, planning/selecting intervention strategies, training, implementation, and evaluation of effectiveness.

The ARP system can be differentiated from GTO and other generic prevention support systems because of its specific substantive focus on adolescent reproductive health. For instance, the identification and promotion of science-based programs in the field of teen pregnancy prevention is loaded with political overtones, which must be taken into consideration and which may vary from state to state. The inclusion of coalitions as an intermediary also suggests some differences from other support systems. Coalitions represent a specific type of delivery system with their own cultures and dynamics. In some ways, they too are a prevention support system, as they facilitate the work of local delivery systems and practitioners more than they deliver services directly.

Prevention science seeks to fill the cells of the matrix defined by the meta-question: What programs, delivered by what entities, to what populations, in what settings, produce what results? As those cells are filled, a community decision-maker (e.g., a school curriculum specialist) could assess the needs and characteristics of her community and its prevention delivery system, and find the right program to plug in. At that point, the process of “plugging in” the selected program becomes the most critical variable. That is, will the program be implemented with fidelity? If implementation is appropriate, and if sufficient research has been done in diverse settings and with diverse populations, positive outcomes could be expected with some reasonable certainty.
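To make the matrix metaphor concrete, a single cell of that matrix can be pictured as a record with one field per dimension of the meta-question. The following TypeScript sketch is purely illustrative; the type name, field names, and example values are hypothetical and do not describe any actual evidence registry.

```typescript
// Hypothetical sketch: one cell of the prevention evidence "matrix,"
// with one field per dimension of the meta-question.
interface EvidenceCell {
  program: string;     // what program
  deliveredBy: string; // delivered by what entity
  population: string;  // to what population
  setting: string;     // in what setting
  results: string;     // producing what results
}

// Illustrative (invented) example of a filled cell.
const example: EvidenceCell = {
  program: "school-based substance abuse curriculum",
  deliveredBy: "classroom teachers",
  population: "middle school students",
  setting: "urban public schools",
  results: "reduced 30-day substance use",
};

console.log(example);
```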

Thus, another role for the Prevention Support System is to help communities monitor the implementation of their programs. Fagan et al. (2008) describe just such an implementation monitoring system, which they used in conjunction with Communities That Care (CTC), a framework that exemplifies a Prevention Support System. The monitoring system made use of fidelity assessment instruments, most of which were created by the program developers, while the others were created by CTC. These instruments were complemented, and to some extent validated, through the concurrent use of program observation. Fidelity was also facilitated through staff training that stressed the importance of implementation. Across 13 programs implemented by 12 communities, they found that the use of this system led to high-fidelity implementation of the core program components in sufficient dosages. Thus, implementation monitoring can increase the capacity of the Prevention Delivery System to implement with fidelity, which presumably leads to better outcomes.

How can we expect the Prevention Delivery System, already over-burdened by the many reporting requirements placed upon it, to add the task of systematically and comprehensively monitoring implementation? First, the processes described by Fagan et al. suggest that this could be facilitated through the Prevention Support System. Second, all programs should already be monitoring implementation, both for quality control and for accountability. Third, to the extent that we are confident in the outcomes associated with high-fidelity implementation of EBPs, organizations could be relieved of some or all of the responsibility for measuring outcomes and concentrate on measuring implementation. Ironically, such process evaluation was the evaluation norm for many years before “outcomes mania” took hold. Perhaps we will see a future with more balance, or even a return to an emphasis on process (with outcomes more assured).

We should hold CTC’s monitoring system, and all examples of Prevention Support Systems (e.g., Getting To Outcomes, the ARP system), to the same standards of outcome effectiveness as those we apply to the Prevention Delivery System. Just as local practitioners try to identify the best (perhaps science-based) programs for delivery within their schools and communities, we need to do research on the effectiveness of practices within the Prevention Support System. In the case of CTC’s monitoring system, we would want to know what degree of fidelity would be expected without the monitoring system but with all other aspects of CTC’s support available. What are the relative contributions of the various components (e.g., training, observations) of the monitoring system?

Similarly, it would be important to compare the effectiveness of different prevention support systems (e.g., the Adolescent Reproductive Health Prevention Support System and GTO). How do they compare to a generic program/organizational consultation process? In what ways are these approaches distinct and similar? If we assigned communities to engage in these different processes, would we find a fair amount of commonality in the interventions (e.g., might all encourage assessment of needs and resources as a first step)? What are the relative contributions of these common elements and the distinct elements associated with different support practices?

What tools or processes might be used within these or other Prevention Support Systems, or by the other subsystems of the Framework? Smith-Daniels and Sandler (2008) suggest that the implementation of efficacious prevention programs in community settings could be improved by employing Quality Function Deployment (QFD), a process developed in product manufacturing. This approach provides a blend (though they call it an alternative) of the research-to-practice and community-centered models. While the products (in this case, prevention programs) are developed outside of the community, explicit and systematic attention is given to the variety of stakeholders associated with their implementation (recipients, providers, and service organizations). More specifically, client/user/stakeholder needs are intricately incorporated into the development process, as illustrated in their case example of the development and implementation of a court-based program for the children of divorced parents.

While we have much to learn from the business world about marketing, purchasing, distribution channels and consumption, we must also be careful to realize where alternative approaches need to be created in response to the divergent natures of prevention programs and business products, as well as the different cultures in which they are created and implemented. QFD might be used most effectively when the object of dissemination/implementation is truly a product (e.g., a curriculum) as opposed to a practice, policy, or principle.

Disseminating Culture Change

Zeldin et al. (2008) provide an illuminating example of the implementation of a practice. If dissemination, adoption, and implementation of programs represent complex processes, additional challenges are presented when the entity being disseminated is a cultural practice instead of a program. Though rare, some programs can be implemented with little in the way of organizational preparation and training. “Out-of-the-box” curricula are explicitly designed to minimize logistical, political, financial, cultural, and other barriers to implementation. Part of their appeal is that they require no (or minimal) additional capacity building by the user. The work of Zeldin et al. (2008) represents, in some sense, the other end of the spectrum: they engaged in implementing a new practice, youth-adult partnerships, which changed fundamental power balances within interpersonal relationships. What do we know about how to persuade people to create egalitarian relationships, to share decision-making, and to engage in true partnerships? How do our cultural expectations about the capacities and behavior of youth complicate these change efforts? How do cultural and contextual variations affect the answers to these questions? While the dissemination of neatly packaged products might benefit from the expertise of market researchers, change agents of this sort might also benefit from consultation with anthropologists, political scientists, sociologists, social and community psychologists, those engaged in the science and practice of organizational development, and other social scientists focused on values and norms and how they develop and are modified. Those within the PSS who act as bridges or brokers between the worlds of science and practice must therefore have a range of skills and resources that mirrors the range of types of innovations being disseminated.

Ozer et al. (2008) provide a second example of the dissemination and implementation of an intervention based on the empowerment and participation of youth within youth-serving settings. Their experiences with youth-based participatory research as a change strategy in schools provide valuable lessons on systems change. For instance, they found that the success of dissemination and implementation of this innovation was dependent on factors at multiple ecological levels—from the quality of relationships and communication among key stakeholders to community-level resources and history. As in Zeldin et al.’s work, they found that a key implementation constraint was the degree to which those who traditionally hold power (adults) were open to a transformation in this power relationship. Again, we see the need for the PSS to have skills in the arenas of political and cultural change. Ozer et al. note the different capacities (and related technical assistance) needed to disseminate traditional programs (e.g., mentoring) and those involving changes in roles and power relationships. For instance, for this intervention to be effective, youth had to have political advocacy skills, and the PSS had to have the capacity to develop these skills. In general, the youth-led action research process requires a degree of flexibility not usually associated with training and assistance based on static manuals. The authors also introduce another issue to be considered within the Framework: the role of the PSS as a facilitator of sustainability of the intervention.

As the Framework is further developed and modified, it might be valuable to develop parallel versions of the Framework to represent the diverse nature of the entities being disseminated. Thus, as currently conceptualized, the Framework might be applicable across situations. But as it is further operationalized, we might see different frameworks to describe the dissemination of programs, practices, policies, and principles. Zeldin et al. describe the use of nine leverage points that helped with the dissemination and implementation of youth-adult partnerships, all of which seem to have general applicability. For example, they found that appealing to self-interest (never to be underestimated in change efforts) and using social networks were critical strategies in creating the cultural change of sharing power with youth.

Supporting Fidelity and Adaptation

One function of the PSS might be to help the delivery system negotiate the delicate balancing act between fidelity and adaptation. How is a delivery agent supposed to determine “when to hold ‘em and when to fold ‘em”? Lee et al. (2008) describe just such an approach, known as “planned adaptation.” It is certainly useful to distinguish between such planned approaches and failure to implement with fidelity for less productive reasons (e.g., lack of resources, lack of commitment, lack of knowledge). The authors, like others, conclude that the key to successful adaptation is to stay true to the core components of the program while adapting others to fit local needs, resources, values, and culture. While no one argues with the logic of this, many will question our ability to determine which components are “core.” Ideally, a component research design would be used to assess the relative contribution of several aspects of the intervention to the observed outcomes. While there are instances of such research, most of our interventions remain “black boxes” waiting for someone to open them and assess the relative functions and interactions of their components. In the absence of empirical data, we often rely on program theory, that is, the logical and theoretical linkages between activities and outcomes. Theory, often articulated by the program developer, may suggest likely core components, but they remain theoretical until tested.

The adaptation approach described by Lee et al. focuses on adaptations based on differences between the population for whom the program was originally developed and the population for whom it is now intended. While this might be exactly what is called for, it may be the case that such adaptation is either too little or too much. By too little, I would suggest that planned adaptation should consider the entire range of contextual differences between the original and intended settings. In addition to characteristics of the recipients, it might also include differences in the community settings, in the implementing organizations, and in the service delivery staff, to name but a few. On the other hand, adapting for the population might require such radical transformation of the program as to change the nature of the program itself, possibly including the program theory. Modifications of a program originally aimed at those aged 16–17 to fit the needs of those aged 14–15 might be possible. But adapting a substance abuse program developed for elementary school children to target those in high school requires such fundamental change as to cross the “threshold of drastic mutation,” at which point we need to consider the resulting adaptation a new program. Note that this could occur even without a major change in the program theory. That is, both the original and adapted programs might address change in knowledge, attitudes, and behavior (e.g., peer resistance).

Planned adaptation often occurs in response to cultural distinctions between the program as originally designed and a new setting in which it is implemented. Guerra and Knox (2008) provide a case study of these processes in the context of a violence prevention program adapted for use with immigrant Latino youth. They also present a thoughtful discussion of how culture must be considered in the context of the overall Framework. The culture of the implementing organization, the culture of the client, and their interaction must be considered during the processes of adoption and implementation. As many have noted, cultural differences often impede adoption and implementation of innovations with fidelity and may limit the effectiveness of the EBP approach. The authors suggest the need for cultural competence in the transfer of EBPs intended for use across diverse populations. Changes in “surface structure” (e.g., language, relevant scenarios) are usually considered useful for different populations, and benign with respect to the program theory. However, there may be settings in which the basic core components or program theory (“deep structure”) need to be examined for their cultural relevance. Adaptations at this level are less common because of their complexity and the degree to which they threaten the very nature of the program itself. That is, at some point, modification of the deep structure of a program means that the program has lost its identity or relation to the original program of interest and has been modified into an entirely different (although more culturally relevant) program.

As Guerra and Knox point out, cultural competence must be demonstrated within the prevention support system. How can those within this system help to build local capacity (general and specific to an innovation of interest) without a thorough understanding of those communities whose capacity is being addressed? They demonstrated how culture was considered in the selection and preparation for delivery of a violence prevention program in an immigrant Latino community.

Similarly, adopted programs must be a good cultural fit with the prevention delivery system. The program, its underlying theory and assumptions, and the processes by which it is adopted and implemented, must be culturally compatible with the agency and staff who implement the intervention, as well as with the clients who receive the intervention.

If innovation adaptations are necessary for different cultures and age groups within our country, consider the complexity of adaptations that might be necessary for implementation of social technologies disseminated and implemented on a global scale. This is the focus of the work of Galavotti et al. (2008), who describe their efforts to adopt and implement an HIV/AIDS prevention program, Modeling and Reinforcement to Combat HIV/AIDS, for unique contexts in several African countries. The primary adaptations were in the content of mass media campaigns. As such, these adaptations were in the surface structure of the innovation—the basic program model was kept intact across settings.

Rather than view such adaptations as outside of the Framework (deviations from faithful implementation of validated innovations), the authors suggest that facilitating planned adaptation should be one of the roles of the Prevention Support System. As such, the PSS would not simply be the broker of established programs; it would help potential users analyze the innovation for fit and possible adaptation. A competent PSS, with experience in providing support to numerous service delivery systems in varied settings, would presumably have a combination of experiential and empirical knowledge of what kinds of adaptations are necessary (or at least acceptable) in what settings. This would not only ease the burden on delivery organizations of having to figure this out, but would also discourage organizations from moving toward adaptations that might be detrimental to the integrity and effectiveness of the innovation. As such, the PSS would be charged with supporting both adaptation and fidelity to the program model/theory.

If the PSS were to take on this additional role, it would suggest one of several possible intersections between the PSS and the Prevention Synthesis and Translation System. Already, I have suggested that the elements and variations of the PSS should be the object of outcome research to determine best support practices. The research community and the PSS should collaborate to design such effectiveness trials. This additional issue of planned adaptation creates another possible collaboration between the systems. That is, if the PSS is facilitating the use of planned adaptation, the research system could be brought in to assess the outcomes of multiple adaptations in multiple settings in order to help facilitate future matches between setting and program adaptation.

Implementation and Outcomes—A Review of the Literature

None of this focus on implementation would be necessary if implementation were not related to outcomes. The increased attention given to implementation is partially the result of the maturation of the literature on effective prevention programs. To some extent, we now believe that if we take a validated program and follow the directions on how to use it, we are likely to achieve the intended outcomes. Durlak and DuPre (2008) have provided a great service by reviewing over 500 studies in the literature (most of which were included in several meta-analyses on this issue) that have implications for the implementation/outcome relationship, along with 81 studies that identify factors related to implementation. While the number of studies available on this topic is impressive in an absolute sense, it also indicates that the majority of outcome research studies do not address the issue of implementation.

Their review is quite conclusive that the degree of implementation is significantly related to the amount of positive change achieved by an intervention. However, they also note that the relationship has not been precisely delineated. They conjecture that there might be a maximum implementation threshold effect, such that above a certain level of implementation, increased fidelity contributes no additional value to program outcomes. To this, we might add the hypothesis of a minimum implementation threshold, below which fidelity does not matter but above which it does.
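These two conjectures can be pictured together as a simple dose-response form. The following is purely an illustrative sketch of the hypothesized floor and ceiling, not a model estimated by Durlak and DuPre; the symbols $f_{\min}$, $f_{\max}$, $E_0$, and $\beta$ are hypothetical.

$$
E(f) =
\begin{cases}
E_0 & f < f_{\min} \quad \text{(below the floor, fidelity does not matter)}\\
E_0 + \beta\,(f - f_{\min}) & f_{\min} \le f \le f_{\max} \quad \text{(fidelity pays off)}\\
E_0 + \beta\,(f_{\max} - f_{\min}) & f > f_{\max} \quad \text{(above the ceiling, no added value)}
\end{cases}
$$

Here $f$ is the observed level of implementation fidelity, $E(f)$ is the expected program outcome, $E_0$ is the outcome under negligible implementation, and $\beta$ is the marginal return to fidelity between the two thresholds.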

If implementation is related to outcomes, what factors are related to implementation, and what are the implications for practice and policy? The review goes on to identify how multiple elements in the Prevention Delivery System (many of which are indicators of capacity) are related to implementation. In addition, the Prevention Support System has a critical role to play. The 20 studies they identified showed that training and technical assistance matter. Given the resources we devote to these processes, why is the literature on their effectiveness so limited? Again, we are left with the conclusion that of the three subsystems in the Framework, the Prevention Support System is the most fragile, both in terms of its presence and implementation and in terms of our understanding of its effectiveness. Three papers in this special issue describe possible exemplars of the PSS, but we need to know more about their effectiveness.

Durlak and DuPre conclude their review with a number of recommendations, the implementation of which would advance the fields of practice and research on these issues. A number of these echo themes emphasized in this special issue (e.g., the importance of monitoring implementation, the role of culture in finding the balance between fidelity and adaptation). They also point out that our understanding of the processes of dissemination and implementation is limited by our measurement tools and methods. Implementation issues also have implications for outcome evaluation; it may be misleading at best to evaluate the outcomes of a program until implementation issues have been worked out.

The Framework in Action

What are the reactions of those “on the ground” to the dissemination and implementation of EBPs? Julian et al. (2008) present the community perspective on the utility and operation of the Framework. The processes and associated challenges of the Framework are presented in the context of six counties’ experiences. They found reasons to be both encouraged and discouraged about the use of the Framework at the local level. On the positive side, they felt that local officials were relatively sophisticated and willing to use research to help guide their program decision-making. They also concluded that the service delivery systems available at the local level have the capacity to deliver science-based programming. However, they found the Prevention Support System to be “inadequately funded and valued,” a perspective echoed by Guerra and Knox (2008), who note that it is totally absent in some communities. The findings of Julian et al. were used to influence state funding and policy to enhance their prevention support systems. If the support system is to serve as the broker, the key link between research and practice, we must find ways to increase the capacity of this piece of the infrastructure, or we will continue to lament incomplete adoption and implementation of science-based programs and practices. Various federal systems have been created for this support role (e.g., CSAP’s Centers for the Application of Prevention Technology), and other examples are presented in this issue, but they remain the exception rather than the rule in most communities.

Livet et al. (2008) also focus on the “end user”—the Prevention Delivery System. While creators of systems for implementation advocate a planned, rational approach to programming, not all organizations participating in the delivery system fully engage in this approach or have the capacities to do so. This paper identifies the organizational characteristics associated with the use of four programming processes—planning, implementation, evaluation, and sustainability. Not surprisingly, leadership, shared vision, technical assistance, and advocates for the use of these processes led to their increased use. In a sense, this work looks at the implementation of frameworks in the way that other literature examines the implementation of programs. While this may seem more abstract, it may actually be more basic to the strength and functionality of the organization. The findings of this study suggest that the Prevention Support System should work on increasing these correlates of sound programming processes when these capacities or characteristics are lacking in the Prevention Delivery System. More specifically, it might focus on developing effective leadership and facilitating shared vision within the delivery organization, while acting as an advocate for the use of programming processes and providing technical assistance in their use.

The work of Lesesne et al. (2008) takes the use of the ISF to another level. While the other works in this special issue have superimposed their existing work on the newly developed framework, Lesesne et al. provide an example of how the ISF can be proactively and explicitly used to guide prevention work. Furthermore, because CDC is a source of federal funding and policy, she and her colleagues were able to directly and indirectly influence all three systems of the framework. To over-simplify, CDC took the lead role in synthesizing and translating existing research on teen pregnancy prevention, funded intermediary organizations at the state and regional levels that acted as support systems, and provided them with the guidance, resources, and tools for influencing local service delivery agents.

This paper illustrates several interesting dimensions of the use of the ISF. Lesesne et al. (2008) discuss the bi-directionality of the arrows connecting the subsystems. That is, while we think of the energy moving from research to support to delivery, they point out that the delivery systems can and should influence the work of the support systems, and that both should help guide the work of the research community. Nor does all of the research come from academic settings—they used research syntheses created by grantee organizations. They also recognized the necessity of focusing on the connections between these systems, noting that practitioners are not always receptive to the services of support systems. Finally, they pose a daunting but necessary list of research questions regarding the use of the ISF, which focus on the capacity and effectiveness of each system, the nature of their interactions, and the outcomes achieved through their actions.

Implementation and Community Psychology

One of the vexing, if not paradoxical, elements of our understanding of implementation is the apparent conflict of a pro-fidelity implementation message with community psychology values. That is, if we engage in the fidelity-adaptation debate, we as community psychologists find ourselves caught between two of our core values. Our belief in an empirical foundation for practice would move us towards the fidelity argument, as most of the literature seems to indicate that greater fidelity is associated with positive outcomes. On the other hand, we value community participation and the importance of context as contributors to practice, which would lead us to support adaptation. Whaley and Davis (2007) discuss these dual challenges of being culturally competent while adhering to evidence-based practice. The Durlak and DuPre review gives us hope for reducing this tension, as they found that fidelity and adaptation were not necessarily polar opposites on the same continuum and that both were related to positive outcomes when careful attention was given to what should be implemented with fidelity and what could, or should, be adapted. Thus, their review findings are consistent with the three papers (Galavotti et al. 2008; Guerra and Knox 2008; Lee et al. 2008) that described valuable planned adaptations of effective interventions.

It is easy to assume that a strong fidelity message could be interpreted as a top–down mandate to “do as you are told.” However, the literature actually indicates that shared decision-making and local ownership are positively related to better implementation. Thus, those of us who simultaneously believe in empiricism, context, and local control can sleep semi-comfortably.

The Interactive Systems Framework—Challenges and Possible Directions

What does the ISF offer us and where can we go from here? The Framework pushes us in several new directions. First, it encourages our systemic thinking. Players within the Framework (researchers, trainers, funders, delivery agents) have huge responsibilities and obligations to fulfill which require concentration on the immediate tasks at hand. The Framework offers us all an opportunity to step back and examine how our work, wherever we are in the world of prevention, fits in with other key systems and their players. Like any other field, we suffer from too little systemic thinking, and the Framework challenges us to keep our eyes on the big picture. The Framework offers a structure which can help organize our work and our knowledge of prevention.

How can we further use this Framework? What additional work needs to be done to support its development and use? First, I believe we have to recognize that the action is within the arrows of the Framework. That is, even as we work to develop the capacity of each system, we must pay at least as much attention to their interactions. How do those charged with supporting the delivery system access the latest work in research synthesis and translation? From journals? Conferences? Websites? Which ones? What are the obligations of those conducting research and its synthesis and translation to assure that their work reaches audiences of interest? How do support systems find those in the delivery system ready and able to be trained, assisted, and supported? What are the most effective mechanisms of support for different delivery systems in different settings?

These interactive issues should not be limited to the dominant top–down direction of influence. Thinking from a community-based perspective, how do delivery systems inform support systems and the research community about innovations they have developed for the populations they serve, or about their specific needs for support and new research? How do support systems express their desire for research syntheses on different topics?

Second, are there variations in the Framework that need to be created when considering the broad range of prevention—not just programs, but policies, principles, and processes? Some of the work in this special issue focused on culture or normative change as prevention, consistent with Levine’s (1998) focus on value change as the crux of meaningful prevention. How do the mechanisms or components of the Framework’s systems differ in this context? The Framework seems to be most immediately suited for the implementation of individually focused prevention programs (e.g., skill-training, mentoring, social support). How does the Framework need to be adapted, if at all, when we consider environmental change, such as social policy, as prevention? Who are the delivery system agents for policy change, for advocacy, or for public awareness campaigns, and how can we best support them? For example, there is a movement to improve the nutritional environments of schools (e.g., changing vending machine options, improving cafeteria choices, reducing portion sizes) to prevent obesity. Who are these change agents? Parents? School nurses? Principals? Public health officials? How do their research and support needs differ from those of people implementing a structured curriculum in a classroom? Does this suggest a different framework, or simply adjustments to the content of the existing Framework?

Third, there are myriad evaluation research questions suggested by the Framework that should be addressed to assess the capacity and effectiveness of its operating systems and their interactions. To name but a few (see Lesesne et al. for others): What formats for research translation and synthesis lead to optimal utilization? What support mechanisms/systems lead to the greatest adoption and implementation of evidence- or science-based practices and programs? How can the research, support, and delivery systems work together to determine the optimal blend of fidelity and adaptation?

Finally, the Framework is purposefully descriptive, not directive. As such, it provides a mechanism through which we can discuss and understand how prevention moves from research to action, from action to research. But can we use it to create a mechanism for providing guidance to those who work within and across the three systems?

The power and interactive flexibility of the Internet give us the opportunity to take this Framework to a utilitarian/directive level. Imagine the Framework as currently formulated, placed on a web page. Now imagine two menu options within each of the systems. The “Tools” option would open into multiple links for occupants of each system. For instance, the Tools menu within the Prevention Synthesis and Translation System would include links to the latest methods of meta-analysis. Within the Support System, it would include links to the types of support mechanisms described in this special issue (e.g., Getting To Outcomes, Communities That Care). Within the Delivery System, the Tools section might link directly into the Community Toolbox website or to other resources that are more specific to the content of the delivery system of interest (e.g., new violence prevention curricula available through CDC or NIJ).

The second menu link within each system would be for “Knowledge.” Thus, while the Tools section of the research system links to meta-analysis methods for researchers, the Knowledge section would provide links to the actual products of research synthesis and translation, including the meta-analyses themselves. The Knowledge section of the Support System would include research on the effectiveness of support systems, including different models of training and technical assistance. The Knowledge section of the Delivery System would include research findings on effective delivery mechanisms, as well as program evaluations of specific preventive interventions.

In this way, the Framework would become not just a heuristic for understanding systems of prevention, but a repository of our increasing knowledge and tools. There could be menu options for frameworks that contain knowledge and tools in specific prevention content areas (e.g., violence, substance abuse, teen pregnancy), as well as a generic framework for work that is not content-specific.
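To make the web-page idea concrete, here is a minimal sketch, in TypeScript, of how such an interactive Framework might be represented as data. Everything here (the type names, example entries, and placeholder URLs) is a hypothetical illustration of the Tools/Knowledge menu idea, not an existing implementation or part of the published ISF.

```typescript
// Hypothetical data model for an interactive, web-based ISF with
// "Tools" and "Knowledge" menus within each of the three systems.

type SystemName =
  | "Prevention Synthesis and Translation"
  | "Prevention Support"
  | "Prevention Delivery";

interface ResourceLink {
  title: string;
  url: string; // placeholder URLs below are illustrative only
}

interface FrameworkSystem {
  name: SystemName;
  tools: ResourceLink[];     // "Tools" menu: methods and support mechanisms
  knowledge: ResourceLink[]; // "Knowledge" menu: research products and findings
}

interface InteractiveFramework {
  contentArea: string; // e.g., "violence", "teen pregnancy", or "generic"
  systems: FrameworkSystem[];
}

// Example instance for a generic (non-content-specific) framework.
const framework: InteractiveFramework = {
  contentArea: "generic",
  systems: [
    {
      name: "Prevention Support",
      tools: [
        { title: "Getting To Outcomes", url: "https://example.org/gto" },
        { title: "Communities That Care", url: "https://example.org/ctc" },
      ],
      knowledge: [
        {
          title: "Research on training and technical assistance",
          url: "https://example.org/tta-research",
        },
      ],
    },
    // ...entries for the other two systems would follow the same shape.
  ],
};

console.log(framework.systems.map((s) => s.name));
```

A real site would presumably populate structures like these from curated databases maintained by stakeholders in each system, with content-specific frameworks simply being alternative instances keyed by contentArea.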

Of course, imagining this possibility is only the first step. A great deal of time, money and additional thought would be necessary to implement such an idea. CDC has taken the lead in providing the resources and leadership necessary for the development of this Framework. Perhaps they would be willing to invest the additional resources to develop and maintain this interactive version of the Framework.

Field of Prevention Dreams

While in the world of prevention it may never be as simple as “if you build it, they will come,” we can certainly facilitate the relationship between research and practice. If you build it, some of us will provide consultation on what “they” want it to look like, others will help find the people who might want to come, others will help them understand what to expect when they get there, others will chip in for gas, and when they arrive, others will usher them to the seats best designed to fit their individual contours.