Introduction

There is a clear need to reduce youth problem behaviors and to promote positive youth development through broader dissemination of evidence-based prevention programs (hereafter EBPs). Results from the Centers for Disease Control and Prevention’s (CDC) annual Youth Risk Behavior Survey indicate high rates of problem behaviors that have negative social, health, and economic consequences (CDC 2011). The problem behaviors surveyed by the CDC range from substance misuse and violence to other health-risk behaviors. These behaviors inhibit positive youth development, are associated with family dysfunction, and exact a tremendous economic toll. For example, underage drinking alone was estimated to cost $68 billion annually in 2007 (National Center on Addiction and Substance Abuse 2011).

A report by the National Research Council and Institute of Medicine (NRC–IOM 2009) emphasizes that the negative consequences of these types of youth problem behaviors could be greatly ameliorated through broader delivery of EBPs. In this context, EBPs are defined as prevention programs tested in well-designed, methodologically sound studies, with health outcome improvements demonstrated to be statistically and practically significant (see Flay et al. 2005). Surveys addressing the actual implementation of EBPs in many program delivery systems (e.g., public school systems, public health systems, social service systems) have shown that only small percentages of the populations that could benefit from specific EBPs have the opportunity to participate in them (e.g., Merikangas et al. 2011; NRC–IOM 2009). As a result, the potential of EBPs to achieve population-level impact and thereby enhance public health and well-being is not being realized (Spoth et al. 2013b; Woolf 2008). This is especially true given the current scarcity of the resources that typically fund EBP dissemination, such as federal and state grants. The purpose of this exploratory research was to conduct a survey-based evaluation of EBP implementation readiness in state delivery systems; it was part of a larger research project on a community-based EBP delivery system called PROSPER.

Potential of Extension and Its Linked State Systems for Broader EBP Dissemination

Cooperative Extension, an outreach system based in land grant universities, has been characterized as the largest informal education system in the world (Coward et al. 1986, p. 107), with reach into every state and county in the country. Moreover, translating program-related research into widespread practice is central to Extension’s mission, and the system has a relatively extensive program delivery infrastructure in all states (see Rogers 1995). This capacity and mission suggest considerable system-level potential for the dissemination and evaluation of evidence-based family and youth programming (Molgaard 1997; Spoth et al. 2004). Relevant literature accumulated over the past two decades specifies how the Extension system offers opportunities for better translating EBPs into widespread community-based practice, especially when linked with other program or service delivery systems (e.g., Molgaard 1997; Spoth and Greenberg 2011; Spoth et al. 2015).

Reports on evidence-based programming in the Extension system (Fetsch et al. 2012; Hill and Parker 2005; Perkins et al. 2006) have underscored the system’s potential for the broader translation of EBPs into community-based practice, particularly in collaboration with other systems that disseminate prevention programs (e.g., public education, public health, human services). This literature offers compelling arguments for increased Extension-assisted EBP dissemination, including: (1) fostering a higher degree of consistency between science-based programming and actual practice; (2) facilitating practitioners’ attention to the characteristics of scientifically proven programming; and (3) enhancing scientist-practitioner collaborations.

Perhaps most important in considering the Extension system’s potential for disseminating EBPs to enhance public health—especially when coordinated with education and public health systems—is the directly relevant empirical evidence accrued from randomized controlled prevention trials. Most noteworthy in this context is a study of the PROSPER Partnership Model. The PROSPER Partnership Model is a delivery system for supporting and sustaining EBPs designed to promote positive youth behaviors and reduce negative or risky ones, as well as to improve related family functioning (Spoth et al. 2004); rigorous study supports its effectiveness and cost efficiency (e.g., see Spoth and Greenberg 2011; Spoth et al. 2013a).

This partnership model applies the existing and relatively stable base resources of land grant universities and Extension systems, as well as those of linked public school and public health systems, to the development and maintenance of community partnerships. Teams of community partners focus on delivering a family-focused and a school-based EBP in order to maximize the likelihood of producing community-level positive youth and family outcomes. The PROSPER research trial and associated studies have demonstrated: (1) community teams’ sustainability of evidence-based programming efforts for over 11 years; (2) community teams’ achievement of high recruitment rates for family EBP participation, compared to traditional approaches; (3) EBPs implemented with high levels of quality; (4) positive long-term effects for strengthening family relationships, parenting, and youth skill outcomes; (5) long-term effects for reducing youth problem behavior outcomes (both substance misuse and conduct problems); (6) reductions in negative peer influences indicated by social network analyses; and (7) cost efficiency, as compared with programming implemented outside of PROSPER partnerships, along with cost effectiveness (Spoth and Greenberg 2011; also see www.helpingkidsprosper.org).

The context for the development and conduct of the exploratory survey research reported herein was a series of projects funded by the CDC, the National Institutes of Health, and the Annie E. Casey Foundation that were aimed at developing strategies for increasing adoption of the PROSPER Partnership Model within state Extension systems, along with state agency partners (Education and Public Health) that disseminate EBPs. The funding supported a readiness survey of each state’s Extension system and companion surveys with key informants from the Departments of Education (DOE) and Public Health (DPH) in all states.

Readiness-Related Factors in EBP Dissemination

An extensive literature on organizational, community, and systems readiness has identified a number of readiness-related factors in EBP adoption, positive EBP implementation outcomes, and sustainability of EBP implementation (Chinman et al. 2005; Foster-Fishman et al. 2007; Hemmelgarn et al. 2001; Johnson et al. 2004; Ogilvie et al. 2008; Plested et al. 2006). Several recent studies highlight the critical importance of readiness assessments in prevention program support systems (e.g., Cooper et al. 2015; Flaspohler et al. 2012; Harris et al. 2012), particularly those entailing scientist-practitioner partnerships (Özdemir and Giannotta 2014). They also reveal gaps in the research on these readiness-related factors, including the need to better develop readiness measurements (Chaudoir et al. 2013; Emmons et al. 2012; Stamatakis et al. 2012). In this context, readiness has been operationally defined in various ways but commonly refers to an organizational unit’s or system’s ability to initiate and effectively implement innovative programming (see Weiner et al. 2008). Notably, despite the potential Extension has for disseminating prevention-oriented EBPs, researchers have identified a number of barriers concerning readiness within this complex system.

The readiness-related factors directly relevant to the Extension system, delineated in a growing literature (e.g., Betts et al. 1998; Dunifon et al. 2004; Fetsch et al. 2012; Hamilton et al. 2013; Hill and Parker 2005; Perkins et al. 2006), include: (1) limited financial resources and time (e.g., competing time demands); (2) perceptions that EBPs do not adequately address programming needs and that they are not necessarily superior to traditional programming; (3) inadequate Extension staff knowledge, training, and skills specific to EBP implementation, including lack of familiarity with the language and concepts of EBPs; (4) Extension staff resistance to change from their traditional programming roles (e.g., development of brief educational programming or materials in response to local community requests); and (5) difficulties in accommodating collaborations with scientists or academic departments that might be beneficial to EBP implementation and related program evaluation, particularly due to time constraints. The financial resource-related factor has become especially prominent in the last 4–5 years, as a result of shrinking federal and state budgets.

Following from the review of the literature on readiness and consideration of factors in adoption of the PROSPER Partnership Model, we focused on three key constructs: perceived need for collaboration, organizational capacity, and engagement in the programming of interest. To begin, there is an extensive literature on the general benefits of community collaborations (for a review see Foster-Fishman et al. 2001), and additional literature specifically highlights the critical role of collaborations in the community-based delivery of preventive interventions (Arthur et al. 2003; Hawkins et al. 2010; Kim et al. 2015; Roussos and Fawcett 2000; Spoth and Greenberg 2005; Wandersman et al. 2008). This literature concludes that community collaborations can be effective delivery mechanisms for prevention programming when they are focused on both community mobilization and the use of strategies grounded in prevention science. Although it has been conceptualized in varying ways, there also is a substantial body of literature suggesting that an organization’s capacity is another key predictor of the adoption and successful implementation of new practices such as prevention programming (Durlak and DuPre 2008; Elliott and Mihalic 2004; Fixsen et al. 2005; Flaspohler et al. 2008; Greenhalgh et al. 2004; Johnson et al. 2004). There is consensus that such factors as funding and human resources are key, including staff availability, skills, and training. Lastly, a smaller set of articles suggests that prior engagement in and experience with evidence-based prevention programming enhances the likelihood of adoption of newly introduced evidence-based programming efforts (e.g., Kim et al. 2015; Spoth et al. 2013b).

Gaps in the Literature and Related Research Questions

The literature review revealed substantial work on organizational, community, and systems-level readiness factors, as noted above. Within this body of work is the aforementioned literature on readiness factors in the Extension and other dissemination systems with which it may link (e.g., those related to collaboration, organizational capacity, and engagement), but many gaps remain in this literature. First, although there has been some Extension readiness-related survey research conducted in Washington and New York states, no national survey research could be found. In addition, no regional survey work was uncovered that would allow comparisons of readiness factors across Extension regions. Finally, no national readiness surveys of the dissemination systems with which Extension systems frequently link could be found. These gaps in the literature, along with research indicating the PROSPER Model’s effectiveness in disseminating EBPs, suggested the need for the surveys reported in this paper. The survey research was considered formative and exploratory, addressing three research questions mapping onto the research gaps noted.

The Extension system and companion agency surveys described herein were used to measure readiness-related barriers, along with those factors identified as central to successful implementation of the PROSPER Partnership Model. The first exploratory research question concerned national and regional Extension system staff readiness for prevention programming, particularly EBPs—indicated by engagement in such programming, perceived need for relevant collaborations, level of organizational capacity, and relevant training—along with the comparative strength of these indicators of readiness. The rationale for this research question was to address the readiness-related knowledge gaps indicated in the literature review above. A second question concerned differences across the four Extension regions in levels of readiness. A third exploratory question concerned the comparative levels of readiness between state DOEs–DPHs and state Extension systems. The rationale for addressing the second and third questions is as follows.

An opportunity afforded by the national Extension readiness survey was the prospect of examining regional differences in readiness levels. The national Cooperative Extension System comprises four geographic regions that mirror the regional structure of the US Census: the North Central, the South, the Northeast, and the West. Each Extension region has its own association and directorship that develops a set of priorities and standards related to outreach and evaluation in each core programming area. This renders it more likely that state Extension systems within the same region will have similar practices and standards relevant to selecting and implementing EBPs, but that these may vary across regions. Another factor that could create differences across the regions in prevention-related programming is that the perceived need for EBPs might vary across regions. For example, regions with higher levels of youth substance misuse may be more inclined to seek out evidence-based prevention programs to address this problem. For these reasons, the authors chose to examine differences in readiness constructs across the four Extension regions.

Finally, there were several interrelated reasons for surveying representatives from the Departments of Education and Public Health, in addition to Extension. Although a national survey assessing training needs of the public health workforce concerning evidence-based decision making recently has been conducted (Jacob et al. 2014), other relevant types of readiness assessments were not found. Most importantly, the design for the PROSPER Partnership Model entails active collaboration of Extension systems with DOEs and DPHs, as potential supporters of EBP delivery. Among currently delivered EBPs, financial and other forms of support often originate in these state departments. For example, survey research on programming for youth indicates that DOE-supported public schools serve as key implementers of EBPs and that an appreciable proportion of their prevention programming consists of EBPs (Hallfors and Godette 2002; Ringwalt et al. 2009). In addition, state DPHs often assume responsibility for administering EBPs that receive federal funding (e.g., block and other grants from the Substance Abuse and Mental Health Services Administration).

Because DOEs and DPHs could be potential sources of advisory, funding, and other forms of support for implementation of the PROSPER Partnership Model, assessing state DOE and DPH readiness factor levels in parallel with Extension system readiness was considered to be a critical part of assessing overall state EBP delivery readiness. In addition, the EBP survey literature cited above suggested that the DOEs and DPHs in many states were comparatively more ready for broader EBP delivery than were Extension systems, at least based on reported rates of EBPs implemented. The project’s DOE–DPH survey provided an opportunity to evaluate that expectation.

Methods

Extension System Survey Sample

The Extension system survey targeted employees of the youth and family program areas of the Cooperative Extension Systems in land grant universities. The sampling frame was limited to existing lists of employees gathered directly from open directories on the universities’ websites. A total of 5,072 names comprised the initial pool of potential Extension respondents. In states with fewer than 100 identified staff members, all identified Extension staff members were invited to participate; in states with larger systems, 100 staff members were randomly selected and invited to participate. The final national Extension sample pool included 4,181 individuals.

Sample participants were well educated: 68.5 % had a master’s degree or bachelor’s degree with additional coursework and 11.2 % had a terminal degree. On average, these participants had been in their current positions for 10.6 years (SD = 9.4) and employed by their state’s Extension system for an average of 13.6 years (SD = 10.3). Ninety-five percent of the sample had full-time positions. Just over three-quarters of the participants (76.8 %) were community-based educators whose primary responsibility was to deliver family and/or youth programs, 6.5 % worked at a regional level within their state, and 16.8 % worked at the state level (state and regional level positions tended to be more administrative in nature).

Extension System Survey Administration

Prior to survey administration, state Extension Directors were informed about the project and were asked to encourage participation among their staff. A competitive incentive of $2000 was offered to the states with the highest response rates within each of three size categories (small, medium, and large Extension systems). In addition, $500 was offered toward professional development or training to a randomly selected respondent in each participating Extension system.

The survey was administered online via a secure web server, with a unique ID and password for each respondent. Data were collected over the course of a month. The response rate was 23 % (958 completed surveys; data from 12 of these were not usable). A review of the relevant literature suggested that this rate is consistent with response rates from similar studies using web-based approaches (Couper 2001; Dillman et al. 1998; Hamilton 2009).

DOE–DPH Survey Sample

The sample included DOE–DPH program administrators and implementers responsible for programs designed to prevent youth problem behaviors, particularly substance misuse. From the relatively limited pool of potential participants, 467 were identified and targeted for recruitment (aiming for a sample of four individuals from each department in each state, with approximately half representing each type of state department). Of the initial 467 potential respondents, 46 were subsequently deemed ineligible (primarily due to termination of employment or retirement), 41 refused participation, and 42 could not be reached, yielding an N of 338 (a response rate of 79 %). Approximately 87 % of the sample participants had a master’s or bachelor’s degree. On average, respondents had been in their current positions for 6.9 years; about half (51 %) were in administrative positions.

DOE–DPH Survey Administration

The survey was administered via computer-assisted telephone interviewing. Depending on the availability of contact information, the respondents were first contacted via phone by a trained interviewer to either conduct or schedule the interview. A consent letter was read to all respondents at the beginning of the interview and, after obtaining the respondent’s permission to proceed, the survey was administered. Due to restrictions on monetary compensation to state employees, no incentive was offered to participants.

Survey Development and Measures

Constructs concerning readiness factors summarized in the Extension and broader literature were reviewed for purposes of constructing the survey reported in this paper. Many of the key constructs mapped onto recent publications addressing specific barriers to and enablers of EBP implementation in an Extension context, including the Washington state survey conducted by Hill and Parker (2005). The constructs measured focused on readiness for a combination of prevention program implementation (particularly that involving EBPs) and related collaboration, as indicators of readiness for a PROSPER-like approach to prevention program dissemination. Measures related to these factors were adapted primarily from four sources: Simpson’s Model of Systems Readiness (Lehman et al. 2002; Simpson 2002), Aarons’ Evidence-Based Practice Attitude Scale (Aarons 2004), the CYFAR Organizational Change Survey (Betts et al. 1998), and the PROSPER Partnership Network Community and Educator Readiness measures (PROSPER Partnership Network 2011).

Except as noted below, all measures utilized five-point Likert-type response scales, most of which assessed degree of agreement or level of importance. In all cases, lower values indicated lower levels of readiness, with a value of 3 indicating neutral or “mixed” responses. The only items that were measured differently were those addressing staff training and development; those items utilized a nominal response scale with four categories (No training/not applicable = 1, Applicable, but no training = 2, Adequate = 3, Too much = 4).

A series of factor and reliability analyses was conducted. The goal of the first principal components factor analysis was to identify broad content areas addressed by the items in the survey. The scree plot resulting from this analysis suggested six factors. Four of these broader factors emerged as most relevant to assessing readiness and were used in subsequent analyses (see Table 1). The first primary factor—state engagement in prevention programming—included 17 items (e.g., “I know where to go to find information on evidence-based programs…,” α = .87); the second primary factor—perceived need for EBP-related collaborations—had 9 items (e.g., “Based on my perception of our statewide needs for evidence-based programs and related partnerships, we should do more to facilitate partnerships between state- and county-level staff to support community prevention programming,” α = .90); the third factor—organizational capacity—consisted of 25 items (e.g., “Our… staff have enough time to complete assigned duties,” α = .89); and the fourth factor—perceived need for training—consisted of four items (α = .61).

Table 1 Readiness factor scales and subscales for the Extension and DOE–DPH surveys: number of items, reliabilities and percentages of higher/lower scores

Following the identification of the four primary factors, an additional series of factor analyses was conducted to identify sets of items comprising subscales within the primary factors expected to be of most relevance to successful adoption of the PROSPER Partnership Model. Scree plots suggested three subscales for the state engagement in prevention programming factor and three subscales for the organizational capacity factor (not all items from the primary factors loaded onto the identified subscales). There were no subscales identified for either the perceived need for EBP-related collaboration or the staff training and development factors (see Table 1). Reliability coefficients ranged from .71 to .85 across the six subscales.
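Although the study’s actual analysis code is not part of this article, the generic scale-construction steps described above—extracting eigenvalues of the item correlation matrix for a scree plot and computing Cronbach’s alpha for each candidate scale—can be illustrated with a minimal Python sketch. The function names and the respondents-by-items data layout are hypothetical assumptions for illustration only.

```python
import numpy as np
import pandas as pd

def scree_eigenvalues(items: pd.DataFrame) -> np.ndarray:
    """Eigenvalues of the item correlation matrix, in descending order;
    plotting these against their rank yields the scree plot used to
    judge how many components/factors to retain."""
    corr = items.corr().values
    return np.sort(np.linalg.eigvalsh(corr))[::-1]

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of items assumed to form one scale
    (rows = respondents, columns = Likert-type items)."""
    complete = items.dropna()
    k = complete.shape[1]
    item_variances = complete.var(axis=0, ddof=1).sum()
    total_variance = complete.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical usage: 'survey' is a respondents-by-items DataFrame and
# 'engagement_items' lists the engagement item columns.
# print(scree_eigenvalues(survey))
# print(cronbach_alpha(survey[engagement_items]))
```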

The DOE–DPH survey development proceeded through a parallel process. Due to the similarities in items between the Extension and DOE–DPH assessments, the initial principal components factor analysis resulted in corresponding primary factors, with the exception of staff training and development, which was not included in the DOE–DPH survey. The follow-up factor analyses conducted for each factor suggested two subscales for the engagement in prevention programming factor, three subscales for the organizational capacity factor (not all organizational capacity factor items loaded onto its subscales), and no subscales for the perceived need for EBP-related collaborations factor. See Table 1 for more detail on the DOE–DPH factors and subscales.

Analyses

Descriptive data analyses were performed to answer the first research question, concerning readiness scores at the national and regional levels for Extension and the DOEs–DPHs. McNemar Chi Square analyses then were conducted to assess differences in the proportions of respondents with lower- or higher-level readiness among the primary readiness factors. To address the second research question, concerning regional differences across the readiness factors, a series of one-way ANOVAs and post hoc comparisons was conducted, as summarized in the results section. Finally, t tests were conducted to address the third research question, comparing readiness factor differences between state Extension systems and DOEs–DPHs.
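As a concrete illustration of the tests named above, the following minimal Python sketch shows how the regional one-way ANOVAs with Tukey post hoc comparisons and the Extension versus DOE–DPH t tests could be implemented with scipy and statsmodels; a companion sketch of the McNemar comparisons appears with the results below. The function names, column names, and data layout are illustrative assumptions, not the study’s actual code.

```python
import pandas as pd
from scipy.stats import f_oneway, ttest_ind
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def regional_anova(df: pd.DataFrame, score_col: str, region_col: str = "region"):
    """One-way ANOVA across the four Extension regions on one readiness
    factor score, followed by Tukey HSD post hoc pairwise comparisons."""
    clean = df[[score_col, region_col]].dropna()
    groups = [g[score_col] for _, g in clean.groupby(region_col)]
    f_stat, p_value = f_oneway(*groups)
    tukey = pairwise_tukeyhsd(clean[score_col], clean[region_col])
    return f_stat, p_value, tukey

def system_comparison(extension_scores, doe_dph_scores):
    """Independent-samples t test comparing Extension and DOE-DPH
    respondents on a given readiness factor score."""
    return ttest_ind(extension_scores, doe_dph_scores, nan_policy="omit")

# Hypothetical usage with a respondent-level DataFrame 'ext':
# f, p, posthoc = regional_anova(ext, "engagement")
# print(f, p); print(posthoc.summary())
```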

Results

Extension System Readiness Factors

State Engagement in Prevention Programming

The national mean score on the state engagement in prevention programming factor scale was 2.93, which approximates the midpoint on the Likert-type scales in the survey and suggests relative neutrality or mixed perceptions concerning the level of readiness regarding this factor. For the purpose of conducting McNemar Chi Square analyses, a score of 3.5 was used to establish a cut-off point, above which scores suggest higher levels of readiness (Likert responses 4 and 5 indicate higher ratings on each of the specific readiness items). The McNemar Chi Square analyses indicated that the proportion of higher scores on this readiness factor was significantly smaller than the proportion of higher scores on the organizational capacity factor (χ2 = 135.19, p < .001) and the perceived need for EBP-related collaboration factor (χ2 = 526.41, p < .001, see Table 2).
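A minimal Python sketch of this dichotomization and paired-comparison procedure follows, assuming the same hypothetical respondent-level data layout used in the earlier sketches; it applies the 3.5 cut-off and the chi-square form of McNemar’s test from statsmodels, and is illustrative rather than the authors’ code.

```python
import pandas as pd
from statsmodels.stats.contingency_tables import mcnemar

def paired_mcnemar(factor_a, factor_b, cutoff: float = 3.5):
    """Dichotomize two readiness factor scores from the same respondents
    at the cut-off (1 = higher readiness, 0 = lower), then test whether
    the proportions of higher scores differ via McNemar's test."""
    pair = pd.DataFrame({"a": factor_a, "b": factor_b}).dropna()
    a = (pair["a"] > cutoff).astype(int)
    b = (pair["b"] > cutoff).astype(int)
    # Paired 2x2 table: rows = factor A low/high, columns = factor B low/high.
    table = pd.crosstab(a, b).reindex(index=[0, 1], columns=[0, 1], fill_value=0)
    # exact=False gives the chi-square version of the test, consistent
    # with the chi-square statistics reported in the text.
    return mcnemar(table.values, exact=False, correction=True)

# Hypothetical usage:
# result = paired_mcnemar(ext["engagement"], ext["capacity"])
# print(result.statistic, result.pvalue)
```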

Table 2 Extension national and regional mean comparisons across the primary readiness factor scales and subscales

There were significant regional differences on this factor overall (F = 8.131, p < .001), as well as on the support for prevention and commitment to evaluation subscales. For the overall factor, Tukey post hoc comparisons of the four Extension regions indicated that the mean scores for the Northeast (3.05) and South (3.02) regions were significantly higher than the mean scores for the North Central (2.85) and West (2.83) regions (see Table 2).

Subscale scores generally were consistent with the pattern for the overall state engagement factor, with the Northeast and South regions scoring higher than the national average and the North Central and West regions scoring lower (see Table 2). Significant regional differences were found on the support for prevention (F = 8.658, p < .001) and commitment to evaluation (F = 8.879, p < .001) subscales (see Table 2). For the support for prevention subscale, the mean scores for the Northeast and South regions were both significantly higher than the mean score for the West, with the South region mean also exceeding that of the North Central region. For the commitment to evaluation subscale, the mean scores for the Northeast and South were significantly higher than the mean scores for the North Central and West. There were no significant regional differences for the knowledge of EBPs subscale.

Perceived Need for EBP-Related Collaboration

The national mean scale score of perceived need for EBP-related collaboration was 3.89, above the scale midpoint of 3.0 and highest among the factors assessed on a five-point scale. There were significant regional differences on this factor score (F = 6.474, p < .001; see Table 2), with the Northeast region producing the highest mean score on the overall factor (4.08), indicating a relatively higher level of perceived interest in and need for increasing and improving collaborative efforts than in the other regions.

Organizational Capacity

For the overall organizational capacity scale, the national mean score was 3.34, slightly above the scale midpoint (see Table 1). A McNemar Chi Square analysis indicated that the proportion of high scores on this readiness factor was significantly smaller than the proportion of high scores on the perceived need for EBP-related collaboration factor (χ2 = 263.94, p < .001). Regional differences for this factor scale also were significant (F = 4.005, p = .008); the Northeast and the South scored significantly higher than the West (see Table 2). Notably, the subscale focusing on perceived resources produced the lowest subscale scores, with a national average of 2.48 and all regions falling into a relatively lower range (see Table 2). A significant regional difference also was found for that subscale (F = 2.743, p = .042), with the mean score of the North Central region exceeding that of the West (see Table 2). Significant regional differences also were found on the collaboration experience subscale (F = 12.72, p < .001); mean scores for the Northeast and South regions were significantly higher than the mean score for the West, with the South region mean also significantly higher than the North Central region mean (see Table 2). There were no significant regional differences on the system openness to change subscale.

Staff Training and Development

Concerning the staff training and development factor, the national and all four regional scores fell, on average, between the “applicable, but no training” and “adequate” response options, that is, below the “adequate” level. Scores ranged from 2.54 (West) to 2.71 (South). Regional differences were statistically significant (F = 11.111, p < .001), with the South region producing a mean significantly higher than the means for the other regions (see Table 2).

Parallel DOE–DPH Readiness Factor Scores

The DOE–DPH sample means were generally high across the assessed primary readiness factors at both the national and regional levels, particularly so for the state engagement in prevention programming and EBP-related collaborations factors, for which mean scores exceeded 4 (see Table 3). The McNemar Chi Square tests indicated that the proportions of higher scores for the state engagement in prevention programming and perceived need for EBP-related collaborations factors were each significantly greater than the proportion of higher scores for the organizational capacity factor (χ2 = 42.61, p < .001 and χ2 = 45.62, p < .001, respectively); the engagement and collaborations factors did not differ significantly from each other (χ2 = 1.78, p = .18). Notably, DOE–DPH respondents scored significantly higher (all ps < .001) on all factors than did Extension system respondents (see Table 4). In addition, relative to the Extension system survey results, variations in mean scores across regions tended to be somewhat smaller, with no significant regional differences detected (see Table 3).

Table 3 DOE–DPH national and regional mean comparisons for the primary readiness factors and subscales
Table 4 DOE–DPH and Extension mean comparisons across the primary readiness factor scales and subscales

Discussion

Overview of Findings

The Extension system survey results suggested that, in general, levels of readiness for prevention-oriented EBP implementation were moderate across state systems. Relatively stronger readiness ratings were observed for the perceived need for EBP-related collaborations, although the derivation of scores from ordinal scales and the varying distributional properties of the different readiness factor scores constrain precise comparisons among factors. That said, the weakest readiness subscale scores concerned resources for EBP implementation; relatedly, sub-optimal readiness also was indicated for staff training and development. There were significant regional differences on all primary readiness factors, generally favoring the Northeast region, with the West region showing the lowest scores on three of the four factors. DOE–DPH representatives indicated significantly stronger readiness than representatives from state Extension systems on all factors, and also showed somewhat greater inter-regional consistency in levels of readiness (no significant differences were found across the regions corresponding to those of Extension).

The literature review highlighted a number of barriers to Extension system readiness (e.g., Hill and Parker 2005; Fetsch et al. 2012; Hamilton et al. 2013) that, generally speaking, comport with the survey findings. Although some of the barriers noted in the literature were not specifically measured (e.g., familiarity with the language and concepts of EBPs, and related evidentiary standards), others—such as inadequate staff training, resistance to change, competing time demands, and limited financial resources—are consistent with the findings from the present national survey study. Another parallel with the literature worthy of note is the relatively lower level of commitment to program evaluation, a barrier that was indicated in connection with limited collaboration with academic departments. To place this finding in context, recent survey research conducted with New York Extension educators (Hamilton et al. 2013) underscored that competing time demands are the greatest barrier to research involvement and that such involvement is especially limited in the youth programming area. However, consistent with a “mixed picture,” it also is noteworthy that a key subset of the Extension system’s readiness-related strengths suggested by the reviewed literature (e.g., stronger perceptions of the need for collaboration in general) was measured and, for the most part, supported.

Regional Differences in Readiness

As reviewed in the introduction, there are a number of reasons to expect Extension system regional differences in readiness, including varying region-based programming priorities, standards, and practices. Regional differences in readiness were confirmed, but the reasons for the specific pattern of differences observed are not entirely clear. As noted, on most of the primary readiness constructs, the Northeast region had the highest readiness scores. Perhaps some differences (e.g., commitment to evaluation) are related to the proportion of Extension positions in this region that entail faculty appointments, if those with such appointments are more invested in EBPs and program evaluation. In addition, 4-H programming in the Northeast region is more likely to involve school-based programs and non-traditional 4-H programming than it is in the West, for example, where it often is linked to more traditional, club-based programming (D. Perkins, personal communication, February 2014).

In this context, it is interesting that, in contrast with results from the Extension system survey, there were no significant regional differences in the DOE–DPH survey. This relative lack of differences is difficult to explain. Although lower statistical power resulting from the smaller sample of DOE–DPH representatives relative to the Extension sample likely played a role, it may also relate, in part, to the decentralized organizational structure of the Extension system (see Rogers 1995). In this regard, federal-level education and public health mandates and requirements for DOEs and DPHs might contribute to greater similarities in the measured readiness factors across states and regions. If decentralization were relatively greater for Extension than for the DOEs or DPHs, it would allow for relatively more variability in state system functioning that is sensitive to geographic, economic, cultural, and other conditions (e.g., number of suburban/urban areas) unique to the regions. Moreover, the level of Extension staffing resources varies by region, with the West region having the lowest number of youth and family educators. Higher numbers of staff in other regions may influence readiness both directly and indirectly (e.g., allowing for more EBP-related collaborations, in addition to more staff to implement EBPs).

Comparison of Extension and Education/Public Health Readiness

The DOE–DPH survey indicated that these organizations have relatively strong scores across all readiness factor scales and subscales, showing significantly higher scores than did Extension systems. Methodological considerations discussed below render it particularly difficult to draw any definitive conclusions about the reasons for these differences. Nonetheless, the pattern of findings is consistent with the influence of policies promulgated by the federal agencies that provide funding for state DOEs–DPHs and have increasingly emphasized the need for broader use of funding for EBP implementation (see Spoth et al. 2013b). This policy influence, partially exerted in connection with funding for state programming, may be stronger than it is in the case of the USDA program-related funding that partially supports state Extension systems. In this connection, a recent report (Shapiro et al. 2015) highlights the importance of organizational linkages in the dissemination of EBPs. Considering DOE–DPH missions and the related Federal policy support, existing organizational linkages focusing on prevention programming might be more prevalent in those two departments, as compared with the Extension system.

Salient Findings on Collaborations and Resource-Related Capacity

Study surveys were conducted in the context of the economic downturn that began in 2007–2008, when the authors had seen or heard numerous media reports of state budgetary reductions. In this context, it was not unexpected that resource-related scales showed relatively lower scores across the study surveys.

A kind of validation of the impact of resource and related time constraints emerged in subsequent phases of the project, in which the reported surveys were an early research activity. That is, key state stakeholders who subsequently learned about the prospect of supporting broader EBP implementation in their state through PROSPER indicated high levels of readiness on factors similar to those measured in the surveys, but were greatly constrained by budget cuts and other resource limits. The impact of those constraints was underscored by state stakeholder reactions to the possible economic benefits associated with EBP implementation (comparative cost efficiency, cost effectiveness, and cost benefits). These reactions suggested considerable readiness for EBP implementation projects, but not readiness sufficient to overcome the resource constraints. In the Extension case, this is especially noteworthy in light of the literature on the stated priority of efficient use of resources (e.g., Dunifon et al. 2004; Hill and Parker 2005). That is, the potential of a PROSPER-like model for improving the cost efficiency of programming cannot be realized without initial resource investments, which are forestalled by the immediate lack of resources.

Another interesting pattern of findings concerns the perceived need for EBP-related collaborations. Across state Extension systems and DOEs–DPHs, this readiness factor showed relatively higher scores. This finding bodes well for broader preventive EBP dissemination, at least in some respects. It is instructive, however, to place the pattern of findings in the context of the literature on EBP-related collaboration in Extension. That is, while positive Extension staff attitudes toward collaboration in general are highlighted in the literature, it also is noted that collaborations with academic departments and with individual researchers on evaluation projects have not necessarily been readily accommodated (Hamilton et al. 2013; Hill and Parker 2005). This type of evaluation-specific collaboration is encouraged in federal-level policy regarding prevention program implementation; it also is integral to EBP delivery models like PROSPER. From this perspective, it is noteworthy that commitment to evaluation also had relatively lower scores in the Extension system survey, consistent with evaluation-related collaboration barriers noted in the general literature and with earlier state Extension system surveys (Hamilton et al. 2013; Hill and Parker 2005).

Limitations

The literature reviewed emphasizes a number of limitations with readiness measurement, including the need for briefer, theory-based, more user-friendly measures demonstrating stronger psychometrics (Chaudoir et al. 2013; Emmons et al. 2012; Stamatakis et al. 2012). These and other measurement limitations and challenges are especially salient when addressing prevention programming at the systems level. This survey study highlighted such challenges, particularly concerning EBP implementation supported through the complex, dynamic, multi-leveled organizations surveyed.

It is important to note that there were no existing measures specifically designed to evaluate the readiness of an Extension system or a DOE–DPH to adopt and implement the PROSPER Partnership Model. In addition to their dissemination-related importance in the literature summarized in the introduction, the measures used for this study were selected because they were related to key components of the PROSPER Model. Higher scores on these indicators were expected to reflect higher levels of readiness for successful PROSPER Model implementation. Answering specific questions about the PROSPER Model would have required respondents to have more detail about the Model, which was not feasible to provide as part of the reported research. Thus, we adapted existing measures that were determined to map onto the key components of the PROSPER Model, to serve as proxy indicators of readiness to adopt and successfully implement the Model. The factors that emerged exhibited reasonable reliability, but the validity of these measures as they relate to readiness for PROSPER Model implementation needs to be determined in future studies.

Finally, given the reality of complex, multi-level organizations like those surveyed, it is difficult to assess an organization’s readiness on a global scale. In this study, representatives from all levels (i.e., community, regional, and state) within the Extension system were surveyed, but there was no viable way to account for potential differences in perceived readiness across these levels, given the constraints of the current survey research. Staff working at the community level may have different views than regional- or state-level staff on some of the factors studied, such as capacity and the need for collaborations. Items related to knowledge of EBPs and commitment to evaluation might receive higher scores among those working at the state level, who have more contact with university researchers and the scientific community.

Given the size of the sample targeted for the survey of state Extension systems, a web-based survey approach was the only viable method to collect these data. Although typical for web-based approaches, the Extension system response rate indicates a large percentage of non-respondents. Since we do not know how similar non-respondents are to respondents, caution should be taken when drawing conclusions from the results. In this connection, given that the DOE–DPH representatives were contacted for phone interviews, their response rate was much higher than that of the Extension-based respondents, who were sent a survey invitation via email. However, DOE–DPH respondents were asked only a subset of the items that the Extension-based respondents were, so the factors and subscales for this sample were based on fewer items. Finally, the DOE–DPH respondents were more likely to have administrative roles and to be located at the state level, as compared with Extension respondents, who were mostly located at the community level.

Conclusions and Implications

Overall, the findings present a mixed picture of readiness for broader EBP dissemination in Extension systems and linked state education and public health systems. Specifically regarding the Extension system, the survey results simultaneously underscore readiness-related strengths and highlight challenges concerning existing levels of readiness and, especially, strategies for optimizing readiness.

The critically important challenge of limited training, financial, and other resources to support prospective EBP implementers in their respective organizations is particularly salient. In the context of the aforementioned negative effects of the economic downturn, with its concomitant constraints on state and federal budgets, it is noteworthy that literature reviews highlight how EBP dissemination support systems are underdeveloped, underfinanced, and under-researched (e.g., Kerner et al. 2005; Spoth et al. 2013b; Wandersman et al. 2012). A related implication is the need for innovative funding mechanisms for EBP dissemination support systems, including their readiness assessment and enhancement components, as recently recommended by the Institute of Medicine (IOM–NRC 2014) and funders (e.g., Langford et al. 2012). It is especially important to conduct further research on readiness measures and on strategies for readiness enhancement in existing dissemination systems like Extension, DOE, and DPH, in order to better realize their EBP dissemination potential. Further research using the data sets from the present study entails a more in-depth evaluation of organization management practices (Chilenski et al. 2015) and of differential levels of readiness among Extension-based educators in different program areas (Perkins et al. 2014); these studies represent steps toward addressing the limited research to date.

In this vein, it also is important to note that many findings did suggest the potential of the surveyed systems for enhanced dissemination of EBPs to improve their public health impact, especially when working in combination. The fact that DOE–DPH survey respondents scored significantly higher on all readiness factors and subscales than state Extension system respondents suggests that DOEs and DPHs can be valuable partners for Extension systems that are interested in pursuing prevention programming. The relatively weaker readiness in state Extension systems notwithstanding, findings such as those from the PROSPER prevention trial project highlight the system’s potential for enhancing public health through broader EBP implementation, indicating related system strengths, such as outreach capacities, connections to well-resourced educational organizations, and commitment to the translation of research to practice.