Introduction

Place-based initiatives have been used to improve health and development outcomes in disadvantaged children since the 1990s (Moore, 2014). State- or nation-wide examples include Sure Start in the United Kingdom (UK) (Melhuish et al., 2010), Smart Start and First 5 in the United States of America (USA) (Bates et al., 2006; Bryant et al., 2003), and the Communities for Children Facilitating Partner initiative (CfC) in Australia (Edwards et al., 2011) — the focus of this study. Place-based initiatives involve the coordinated provision of programs and services to address complex social or economic issues within defined geographical areas (Wilks et al., 2015). They target problems that are prevalent and concerning to the whole community, with the selection of programs informed by measures of the child, family and community factors that influence children’s health and wellbeing (Raphael, 2018). Goals and objectives are typically determined through a locally and collaboratively driven, “bottom-up” approach of identifying the best available mix of programs to address local community needs (Edwards et al., 2011; Melhuish et al., 2010). While the programs (e.g., parenting efficacy, child literacy, kindergarten quality improvement, improving service access) to address targeted outcomes (e.g., child/maternal physical and mental health, school readiness, universal service quality) vary between place-based initiatives, all are aimed at making a vital difference to the life chances of children in the areas in which they are run.

Place-based initiatives’ service models vary between and within countries (Burgemeister et al., 2021). While most focus on strengthening universal services, CfC addresses gaps in current service delivery. Funding and delivery structures also vary. For example, place-based initiatives can be wholly funded and delivered by government organizations (UK), or governments can work in partnership with philanthropic and corporate partners with shared responsibilities for funding, governance, and implementation (Ireland, Canada, USA). CfC is funded by the government, which contracts non-government organizations to deliver services at the community level. Despite these varying approaches, common elements exist across design and delivery (flexible delivery, local autonomy, joined-up working, and considered governance), program implementation (capacity development, adequate lead times, and a long-term focus) and evaluation (establishing causality and attribution, having a theory of change, accounting for residential mobility, and analyzing cost-effectiveness) (Wilks et al., 2015).

Evidence of the effectiveness of place-based initiatives has been mixed (Edwards et al., 2014; Kelaher et al., 2009; Knibbs et al., 2013; National Evaluation of Sure Start Team, 2012), and several large initiatives internationally have imposed “top-down” requirements about the use of evidence-based programs (EBPs) and services, with the expectation that this would ultimately improve outcomes for children and families (Edwards et al., 2014; Ladd et al., 2014; Melhuish et al., 2010). This is part of a broader trend in which governments increasingly rely on legislative, policy and funding mechanisms to encourage the use of evidence-based interventions to achieve positive social change (Chamberlain, 2017; U.S. Department of Health and Human Services, 2018; Weiss et al., 2008). Evidence for “what works” to improve whole-of-community outcomes for children and families is growing (Fixsen et al., 2005; Fox, 2015). Successes have been reported for community-level adoption of specific EBPs (e.g., the Positive Parenting Program (Triple P) (Prinz et al., 2009, 2016)), and for place-based approaches that adopt EBPs selected by the community (e.g., the Pennsylvania EBP Initiative (Bumbarger & Perkins, 2008)). However, the acceptance and widespread use of EBPs in community settings remain low. Many communities prefer to use their own “home-grown” programs, arguing that the available EBPs are not a good fit for their local context (Ghate, 2018; Weiss et al., 2008). Deciding whether to utilize a home-grown program or adapt an existing EBP to suit local needs remains a tension in evidence-based practice. Studies have shown that home-grown programs with a sound “evidence-informed” theoretical underpinning are more acceptable to communities and can be as effective as “imported” EBPs (Ghate, 2018; Leijten et al., 2016), and that scaling up programs using a mix of evidence-based and evidence-informed programs can have a positive impact on child outcomes at the population level (Southam-Gerow et al., 2014).

Previous studies have described several reasons for the persistent research-to-service gap: lack of trust in the “evidence”; resource constraints; systems and infrastructure impediments; perceptions that implementing EBPs is onerous; challenges with program adaptation; and organizational culture (Bumbarger & Perkins, 2008; Ehrhart et al., 2014; Fixsen et al., 2013; Ramanadhan et al., 2012; Weiss et al., 2008). To address these issues, a wealth of implementation models and frameworks has been developed. For example, following a review of five public systems’ efforts to scale up EBPs, Fagan and colleagues (2019) developed a model containing several factors affecting scale-up: statutory endorsement and funding; public awareness and support for EBPs; community engagement and capacity; leadership and support; a skilled workforce; data monitoring and evaluation capacity; and developer and funder capacity. Fixsen and colleagues (2016) refer to these types of factors as implementation drivers, which fall into three categories: competency, organization, and leadership. Some drivers may act in a compensatory manner to counteract deficits in others.

Few studies have focused on implementation in disadvantaged communities. In one such study, Hodge and Turner (2016) conducted a comprehensive review of the facilitators of, and barriers to, EBP sustainment, defined as programs being “maintained at least 2 years after training/implementation” (p. 194). Only 43% of studies included in their review successfully sustained programs. A conceptual framework was developed containing 18 sustainment factors under three themes: program characteristics, workplace capacity, and process and interaction factors. They emphasized the value of program developers, researchers and service providers working in partnership to plan implementation and sustainment using this framework as a guide. A recent qualitative study on the implementation of the Incredible Years parenting program in disadvantaged settings in Ireland found similar factors, and additionally noted the importance of parent retention and screening for parental readiness to attend programs (Furlong & McGilloway, 2015). A lack of organizational readiness for change can also impede implementation efforts. This readiness is described by Weiner (2009) as a combination of change commitment and change efficacy. Consideration of organizational readiness may be particularly important when new evidence-based requirements are introduced to mature place-based initiatives that have previously had considerable autonomy over their program selection decisions.

Few studies have examined the drivers of implementation for established multisite, multi-organizational initiatives. In the UK, a 2011 review (Allen, 2011) resulted in Sure Start Children’s Centers being required to use EBPs as part of their mix of services (Goff et al., 2013). A 2013 evaluation found that, while the use of EBPs was widespread, they were expensive to run, had limited reach, and program fidelity was poorly understood (Goff et al., 2013). Likewise, an evaluation of EBP use in First Steps in the USA raised concerns regarding program fidelity (Compass Evaluation and Research, 2015). When CfC commenced in 2004, considerable local-level flexibility was allowed in determining which programs and services would meet community needs (Edwards et al., 2011). A longitudinal evaluation of CfC found little evidence of benefits for children, families or communities, and tentatively concluded that greater use of evidence-based interventions within the initiative would improve its effectiveness (Edwards et al., 2014). A policy decision was therefore made requiring that initially 30%, rising to 50%, of service funding be spent on EBPs, with a mechanism for communities to put forward home-grown programs for potential inclusion in this quota (Hand, 2017). While all sites met the policy requirements, and several internal reports have examined aspects of the implementation of this evidence-based policy change (ACIL Allen Consulting, 2016; Robinson, 2017), little else is known about the experiences of those involved in implementing the policy.

Obtaining an in-depth understanding of how evidence-based policies are perceived by the personnel charged with overseeing their implementation is an important step in forging stronger links between research, policy and practice (Ramanadhan et al., 2012; Weiss et al., 2008). In contrast to direct service providers, who are required by funding agreements to implement EBPs, government researchers, policy analysts and administrators are part of the “system” that enacts such decisions, for example through the interpretation of research evidence, the development of guidelines, reporting and compliance systems, and contractual mechanisms (Rodriguez et al., 2018). Their attitudes, capabilities and leadership support can assist or hinder the effective and sustained use of EBPs at the community level (Boaz et al., 2008; Branch et al., 2019; Rodriguez et al., 2018; Van Dyke & Naoom, 2016). Obtaining the perspective of government-level personnel can also provide critical “arm’s length” reflections about how a multisite, multi-sector initiative approached implementation, including commonalities and differences across communities.

The purpose of this study was to examine experiences of the implementation of EBPs in CfC, an Australian Government place-based initiative. This qualitative descriptive study (Sandelowski, 2000) explores the views and experiences of the government personnel involved with overseeing and supporting the implementation of the EBP policy requirement in CfC. A qualitative design was used to obtain an in-depth understanding of how this cohort responded to the policy and their perceptions about community-level acceptance and implementation. The study addressed the following questions: (1) How do government personnel overseeing the implementation of an EBP policy understand evidence-based practice? (2) What are government personnel’s perceptions about the imposed use of EBPs? and (3) What are government personnel’s perceptions of the factors influencing implementation of an EBP policy requirement? We then link and compare our data to an existing framework and theory to further develop our understanding of the effective implementation of EBPs in place-based initiatives (Morse, 2020). Specifically, we compare our findings to Hodge and Turner’s (2016) sustainment framework and to Weiner’s (2009) theory of organizational readiness for change.

Method

Study Setting

The CfC place-based initiative is the setting for this study. Introduced by the Australian Government in 2004, it currently operates in 52 geographic locations (CfC sites) across Australia, chosen on the basis of multiple criteria for socio-economic disadvantage (Muir et al., 2010). The Australian Government Department of Social Services (DSS) coordinates CfC from its national office in Canberra, with direct administration provided by DSS state and territory offices across Australia. The Australian Institute of Family Studies (AIFS), located in Victoria, provides research, program and practice support. CfC aims to provide programs and services that address the unmet needs of children aged 0–12 years and their families, improve service coordination, build community capacity, and improve the designated local communities. The CfC logic model includes an explicit focus on funded service coordination and cooperation in communities, with local communities determining the types of programs and services delivered based on community needs (Muir et al., 2010). CfC sites are located in metropolitan cities, regional towns, and remote rural locations, and their population demographics and identified needs vary substantially. At each CfC site, a non-government organization (known as a “Facilitating Partner”) is contracted to work with the community to determine local needs and is responsible for the site’s overall facilitation and management (Australian Government, 2014). Facilitating Partners sub-contract other non-government organizations (Community Partners) to deliver services, which may include parenting and family support, early childhood programs, family violence services, adult education and employment pathways, home visiting, and support for families from specific cultural backgrounds (Edwards et al., 2011). A review of CfC by Wilks and colleagues (2015) found that all but one of the identified common elements of place-based initiatives (adequate lead times) were partly or fully demonstrated.

From July 2018, CfC sites were required to spend a minimum of 50% of service funding on EBPs. These can be selected from a “Guidebook” of pre-approved EBPs developed by AIFS. Alternatively, sites can submit programs for assessment (referred to as the “Program Assessment Pathway”), and those determined to be “Promising” can be counted towards meeting their evidence-based requirement. Promising programs must have the following clearly documented features: a theoretical and/or research background; a clear theory of change (e.g., program logic); specified program activities; positive findings from at least one pre- and post-evaluation; and availability of sufficiently trained staff (Australian Government, 2019). If programs do not meet these criteria, sites are advised on how to develop the evidence necessary for gaining Promising status.

Study Design

This qualitative descriptive study (Sandelowski, 2000) involved semi-structured interviews with government-level personnel to gain an understanding of their knowledge of EBPs and their views about the introduction and implementation of the EBP policy in the CfC initiative. Interview methods provide the opportunity for in-depth exploration of participants’ views and experiences with open-ended questions allowing flexibility and probing (Kelly, 2010). Individual interviews were deemed the most appropriate method given the geographical distribution of participants and the sensitivity of site-specific data.

Participants

All national and state government personnel working on CfC (N = 44) were invited to participate via a personalized email. Seventeen personnel (39%) consented. Four were from DSS national office, with oversight of CfC nationally, including implementation of the evidence-based policy across all 52 sites, procurement and central monitoring of funding and service agreements. Ten participants were from DSS state offices in seven of the eight Australian states and territories. They were Grant Agreement Managers or other senior managers who worked directly with local sites to ensure contractual requirements were met. State office participants reported working with around 37 of the 52 CfC sites (71%) across their time with DSS, providing perspectives on the experience of a broad range of the country’s CfC sites including metropolitan, regional, and remote locations. Three participants were from AIFS and provided research and evaluation support to sites developing Promising programs. They reported direct contact with around half of the 52 CfC sites. Participants’ employment experience with CfC ranged from 4 months to 10 years (average 4 years). Thirteen participants were employed in their roles at or near the commencement of policy implementation. All participants were female.

Data Collection

A semi-structured interview schedule was developed, informed by the research questions and evidence from international literature (Supplementary Material 1). Example questions included: “Can you tell me what your understanding of evidence-based practice is?”; “What are your thoughts about the evidence-based program policy change?”; and “Some CfC sites have found it easier than others to implement the evidence-based program policy requirement. Why do you think that is?”. Interviews were conducted by the lead researcher (FB), face-to-face (N = 9) or by telephone (N = 8), between November 2017 and February 2018. Two participants requested to be interviewed together. Interviews were 30–60 min in duration and were audio-recorded. Key issues raised and the interviewer’s reflective observations were recorded in field notes for use in the analysis, and to inform and refine subsequent interviews. As the interviews progressed, the research team discussed completed interviews and minor variations were made to the schedule to allow exploration of new topics in subsequent interviews (e.g., program fidelity). Audio recordings were transcribed and de-identified for analysis.

Data Analysis

Thematic analysis was conducted using the framework developed by Green and colleagues (2007) and supported by Saldaña’s coding manual for qualitative researchers (Saldaña, 2016). Using an inductive approach, the analysis involved four steps: immersion in the data; coding; creating categories; and identifying key themes. The lead researcher listened to and read the transcripts multiple times, and conducted initial coding of all transcripts, linked related codes into categories, and entered codes and categories into a manual coding table. Coded data included direct participant quotes and the interviewer’s observations, such as the degree of ease and familiarity with which the participant described the key policy constructs (e.g., evidence-based practice knowledge, understanding of the rationale for the policy) (Saldaña, 2016). In fortnightly meetings with the second author, transcripts and codes were reviewed and refined. In addition, 40% of transcripts were randomly selected and independently coded by one other co-author for rigor. Team discussions were then held about the codes and categories until consensus was reached. Codes and categories were further refined, and themes were identified to describe and explain participants’ knowledge, understanding and experiences of EBPs, supported by illustrative quotes. A final meeting with the research team was held to discuss and agree on the themes and sub-themes.

To confirm the credibility of the study findings, the themes and sub-themes, supported by participant quotes, were presented by the lead researcher and discussed at a meeting with key representatives from DSS, and at a CfC Facilitating Partners Forum attended by DSS, AIFS and community-level representatives. A small number of personnel work on CfC in the DSS national office or at AIFS. To avoid the potential risk of participant identification, quotes were identified by participant number only, with the participant number removed in the one instance where a response is identified as associated with a national role.

We tested our findings against Weiner’s theory of organizational readiness (Weiner, 2009). According to this theory, readiness for change is a combination of change commitment and change efficacy. Change commitment is driven by the extent to which organizations value the change (change valence), while change efficacy is the appraisal of the organizational members’ capability to implement change, accounting for task demands, resource availability and situational factors. We reviewed the findings from our study and used these to categorize CfC sites and organizations into types that could be compared to the following elements of Weiner’s theory: change valence, change commitment, change efficacy, change-related effort, and implementation effectiveness. We also compared our findings to Hodge and Turner’s (2016) framework for the sustained implementation of EBPs in disadvantaged communities. We created a table of Hodge and Turner’s 18 factors, spanning program characteristics, workplace capacity, and process and interaction factors, and cross-tabulated our findings with these factors.

Results

Six themes were identified from the data: (1) varying levels of knowledge; (2) responding to a big change; (3) implementation concerns; (4) meeting the evidence-based requirement; (5) contextual factors influencing implementation; and (6) workplace factors influencing implementation. Themes and sub-themes are discussed below.

Varying Levels of Knowledge

There was considerable variation in participants’ knowledge of evidence-based practice. All participants working in national policy and research roles displayed a nuanced understanding that the CfC definition of evidence-based practice recognized the value of practice knowledge in the process of building evidence:

“[There’s] resistance in the sector to this really formal academic version of evidence-based. So [CfC] tried to build in that acknowledgment that there can be practice expertise as well…but, it still has to be tested and still has to be brought into that more formal evidence-based space.” (P national)

Several participants demonstrated an intermediate understanding broadly describing Guidebook programs as “…programs that have been tested and are robust and…achieve outcomes for the children and their families” (P17) or giving other technical definitions: “It’s practice based on pre-and post-evaluation that it works” (P2). Variation was also found in knowledge regarding the reason for the policy change. Many participants reported the documented reason, that the government “…wants to ensure better outcomes for vulnerable families” (P10). Other reasons included being accountable with public money; ensuring greater rigor and quality in service provision (with words such as “consistency”, “uniformity”, and “structure” used to describe this); the trend for increased professionalization of the sector; and the need to avoid costly mistakes in the context of limited resources: “There’s very little time, particularly within government funding…where you can just do something, make a mistake…and try a different way. Evidence-based allows that ability to hit the ground a little bit more” (P13). One state participant was unable to define evidence-based practice and two were unable to explain why the policy was introduced.

Responding to a Big Change

Participants expressed their views about the policy change as well as their perceptions of service providers’ responses. The transition from a locally autonomous model to the 50% evidence-based requirement was acknowledged to be “a really big change” that “shook up the sector”. The majority of participants had mixed feelings about the change—they saw the positives while also acknowledging the challenges. Some participants were unequivocally positive, describing the policy change as necessary and overdue, and reporting positive perceptions of both the policy change and the focus on evaluation: “Well it’s exciting for them [service providers], in terms of the learning…you know, like it’s a great opportunity for them” (P2), “I think they see it [evaluation] as really important and something they want to do. So that they can perfect their model for their community” (P6). Others were more qualified in their support: “I’m really supportive of the shift…I just think it needs to be done in an appropriate way” (P9); or had a pragmatic attitude that recognized the practical imperatives that accompany government funding: “For a lot of organizations, this is their biggest income. They have to do whatever [the government] tell them to do” (P8).

Participants perceived “pockets” of strong resistance arising from: failure to recognize potential benefits; perceived limitations to providing responsive local services; difficulties in implementation and managing change; and entrenched beliefs that existing practices were effective: “They’ve got very set ways of doing things. They’ve run the same programs for a squillion years. They know they work. They know they work” (P11). Evaluation was also met with initial resistance and regarded by service providers as “a dirty word”, with some participants surprised at the level of animosity. It was seen as “too hard”, as “someone else’s business”, as something that “does not help”, as taking resources away from service delivery, and as adversely impacting relationships between services and their communities.

Acceptance of the policy improved over time, as service providers saw benefits. Participants reported that there was greater sharing of knowledge and practice, improved professionalism, and skill development that was “…coming up to a level … never seen before” (P13). A real change in understanding the value of evaluation was observed, although some lingering skepticism was expressed by two participants about whether EBPs really changed things “much for the better” (P8). A small proportion of service providers also reportedly remained unhappy: “They still did it…not necessarily willing, but they did it” (P16), and this had made their journey harder. But overall, most participants reflected that the policy change, while not without challenges, had been worthwhile and had re-oriented CfC towards a focus on achieving better outcomes:

“[There is] now a widespread understanding of the importance of evaluation. There’s been a huge shift in people’s understanding of…the importance of reporting outcomes versus outputs.” (P1)

“It was a big change. And…I feel like it’s been a good change…it wasn’t a perfect policy, it wasn’t implemented perfectly, but I feel like we’ve muddled our way through to a pretty good outcome.” (P15)

Implementation Concerns

Implementation concerns following the policy change announcement comprised five sub-themes: tension between a “top-down” policy and a “bottom-up” approach; clarity; achievability; resourcing; and regulatory burden.

Tension between a “top-down” policy and a “bottom-up” approach

When first announced, the evidence-based requirement was considered to be inconsistent with a “bottom-up” place-based approach that “…examines community needs and then responds to them. So right from the start there was this sort of strange disconnect…” (P11). There were also concerns expressed that many EBPs are “kind of almost middle class”, more suited to the “worried well” who were able to attend structured sessions over a long period and would not engage disadvantaged families: “A lot of providers were saying we’ve got very transient families. Locking them down to a program that’s for a number of weeks doesn’t work” (P1). Conversely, one participant reported that, in the context of complex needs and community disorganization, services “…said they really like the structure…and it’s actually really helping them with how they work with families” (P9). While participants recognized that EBPs could work as part of a place-based model, they felt that approaches for combining these were “a bit rudimentary” and there was a “yearning” for additional support.

Clarity

The policy change was described as a “good theory” but poorly thought through, with the perception that the selected percentage quotas were “arbitrary” and chosen without sufficient consideration of “the investment…and the training required…[and] how that actually would play out practically on the ground” (P4). Participants stated that initial messaging was unclear, particularly in relation to which programs met the requirements and how the Program Assessment Pathway would work. There was some “shifting of the goalposts”, and the decentralized CfC structure impeded communication: “By the time information trickles through to the [services], who knows what they’ve been told…people are just really confused about what the hell they should be doing” (P13).

Achievability

Some participants assumed implementation would not be a “huge leap”, and that services were already evaluating their programs. But this “wasn’t the case at all.” The new policy coincided with other significant reporting and administrative changes, exacerbating stress on providers. There was a lack of clarity about the consequences of failing to meet the evidence-based requirement, with the result that some providers “…always seem to have this heightened sense of concern” (P10). Participants sought to reassure them and recognized that: “We need to work with the sector to make sure that we’re not just imposing something that…puts too much pressure on them to try to comply and it leaves other parts of their service provision exposed” (P17).

Resourcing

Participants stated that no additional resources were provided to implement the policy. They felt service providers were already “resource stretched”, and that the time and costs needed to select programs, train staff, evaluate programs and prepare submissions for program assessments were significant. Participants also noted that funding for evaluation had been “watered down” over time and that evaluation was difficult to do well without dedicated resources.

Regulatory burden

Policy implementation was accompanied by regulatory burden and challenges in monitoring whether the 50% requirement had been achieved in a dynamic environment. Sites were required to regularly report their service user data, as well as constantly “do the sums” to ensure their expenditure on EBPs stayed above 50%. Participants felt there was a “…need to really streamline [administrative processes] as best we can…” (P17).

Meeting the Evidence-Based Requirement

Participants described how service providers used different mechanisms to meet the evidence-based requirement. Some sites used only Guidebook programs, others used a mix of Guidebook and Promising programs, and one participant reported their remote site did not use any Guidebook programs. This theme explores the different aspects of meeting the 50% requirement through the following sub-themes: compliance; the value of the program assessment pathway; the role of evaluation; program fidelity; and the highly valued “other 50%”.

Compliance

Compliance with the 50% evidence-based requirement was a strong theme. Participants were surprised by the compliance requirements: “It came a little bit from left-field in terms of it being a compliance issue. We normally don’t have…that type of compliance” (P3). While selecting Guidebook programs was often perceived to be the “easy option”, broader benefits could result: “They’ve really come to this EBPs kicking and screaming, but in the end, they would…be the first to say that they have found some good fits” (P10). Compliance was an ongoing challenge for some sites, while others achieved and exceeded the target with ease. Workplace factors such as organizational culture, leadership and the degree to which organizations valued evaluation and evidence were seen by some participants as the key factors influencing the ease of compliance. Others described contextual factors such as rurality and diversity as playing a key role, as discussed in themes five and six.

Several participants reported that reliance on Guidebook programs caused stagnation, limiting opportunity for program refreshment, renewal and innovation, with the “big players” in service provision dominating in metropolitan areas. Limited options contributed to stagnation in rural areas and providers were more cautious about innovating with new programs: “It’s a big ship to turn around when the government’s breathing down their neck” (P10).

The program assessment pathway was described as hard work and time-consuming. Some participants framed this in a positive way: “[Some services] work really hard, with the motivation of getting their program through this process. Actually really engage in that process” (P11). More typically, it was described as arduous: “It’s been a really long and drawn-out process…” (P6). The criteria used for program assessment (see “Study Setting”) were regarded by some national participants as reflecting the “fundamentals of good programs” and were a “no-brainer”. Others, including national and state participants, said the criteria were too “narrow”. State participants identified a need for an alternative assessment pathway for services that do not fit the more “traditional” program mold, such as those focussing on “community development” and “building relationships”.

Some participants expressed concern that program choices were being driven by compliance and not community needs. Similarly, there was concern that the “rigor and purpose” of the policy was being lost: “I do have some concerns that it’s been, well, let’s do [program] because it’s evidence-based and we’ll meet our contractual requirements but it might not actually be what those families need” (P9). All participants agreed on the need for flexibility. Some suggested the 50% requirement was too stringent for sites with complex populations, while others wanted a loosening of the criteria for all sites and a move toward “evidence-informed practice”. Many participants expressed concern that a further increase beyond 50% may be on the policy horizon.

The value of the program assessment pathway

Participants regarded the inclusion of a pathway whereby providers could submit their own local programs for assessment as having considerable benefits and as critical to the success of the policy implementation. For the first time, theories of change were developed for some longstanding local programs, which were then evaluated and shown to have positive outcomes. Participants reported that services that participated in the assessment pathway acquired a clearer understanding of how to improve outcomes and, importantly, gained a sense of achievement and pride.

“The thing that I think has been really good, is it has started conversations in this sector. What is evidence? How do we measure whether we’re making a difference? Is there a logic behind what we’re doing?” (P15)

“They see it as a good thing. They see it as recognition of a lot of the work they were already doing.” (P6)

The role of evaluation

For a program to be accepted as “Promising” and count towards the 50% evidence-based quota, it had to undergo evaluation. Participants observed that some services were intimidated by this: evaluation was “foreign”, and they feared their program would not be shown to be effective, resulting in a loss of funding and jobs. A few participants reported that evaluation quality had increased due to the policy change. More service providers were seeing it as a “continuous improvement” process that was less about “ticking-the-box” and more about what was “working” for families. The policy was also seen to drive program innovation, particularly for complex populations. Participants provided many examples, including one site that sought more active inclusion of children: “So there’s been a number of innovations…even an online game…to collect evaluative data from children” (P1). Others expressed concern that services did not have sufficient evaluation skills, either themselves or through external contractors who “…it turns out [haven’t] really been up to it” (P10). They highlighted concerns about the harm caused by poorly conducted evaluations (e.g., “shoving clipboards in people’s faces” undermines relationships with vulnerable families). Two participants said the policy focussed evaluation towards home-grown programs, whereas all programs, including those from the Guidebook, need to be evaluated in their context.

Program fidelity

Participants described a tension between program adaptation and fidelity, with an acknowledgment that more “practice knowledge” was needed to enable “rigorous and thoughtful” adaptation. Many examples were provided of sites cleverly adapting programs to the needs of specific communities whilst creating efficiencies. However, reservations were expressed about some adaptations, and whether fidelity to the original EBP was being achieved: “They’ve been allowed to cobble together stuff, but still claim the evidence-based status of the original program” (P12).

The highly valued “other 50%”

Programs and services that were not part of the 50% evidence-based quota were described as “really necessary” and used for: testing new approaches; continuing programs that did not meet evidence criteria; community development; and doing the things they “really wanted”. This “bucket” was often used for engagement activities: “There’s a lot of work, particularly with the most vulnerable clients…that has to go into building their capacity up to the point that they can actually meaningfully engage with these EBPs” (P6).

Contextual Factors Influencing Implementation

Contextual factors impacting on service providers’ ability to implement the evidence-based policy comprised five sub-themes: geographical factors; population diversity; state factors; service duplication; and program factors.

Geographical factors

Geography contributed to how effectively sites could adopt the policy. Regional/remote and metropolitan sites had different challenges. For regional and remote sites, these included: the scarcity of staff and community partners to deliver programs; the logistics of delivery to remote communities; and seasonal infrastructure limitations (e.g., impassable roads in the wet season). Conversely, regional locations had one key advantage: “The relationships are stronger in the country” (P2), which made it easier to integrate with local services: “They’re really well placed to set up their committees and find their partners” (P6). Metropolitan sites found it easier to contract suitable community partners and to source and train staff. Their challenges concerned meeting the needs of diverse families, and negotiating with larger providers or “power brokers”. Irrespective of location, participants expressed the view that implementation effectiveness came down to an individual community and whether they had “all the right ingredients”:

“At first we thought there might be a difference between…remote…and city providers. But that hasn’t really played out…some reasonably remote providers have just smashed this right from the start.” (P11)

Population diversity

Participants described the additional time and effort required when working with populations such as migrant, refugee or Aboriginal and Torres Strait Islander families. Cultural competency training was described as highly successful. Challenges included a lack of EBPs for these populations, difficulties reaching, engaging and building trust, and limited evaluation due to high attrition and low attendance. One participant discussed the benefits of working with Aboriginal Community Controlled Organizations although the evidence-based requirement created “tension”: “Working from a Community Controlled perspective it’s ’what does the community want? Do not tell us we’ve got to deliver this program. We’ll talk to the community and find out what they need and then we’ll do that’” (P9).

State factors

Participants identified three main factors at the state level that supported policy implementation: a collaborative approach; supportive Grant Agreement Managers; and a culture of evidence-based practice across the child and family sector. Conversely, states that struggled had sites that worked in isolation or that “…had their own way of doing things for a long time” (P13). Grant Agreement Managers played important roles supporting implementation, advising on capacity building and acting as the communication link between providers and policymakers. Participants noted this was a steep learning curve: “Evaluation and program implementation type work isn’t necessarily their expertise…[they] come at this from a range of different perspectives” (P12).

Service duplication

One unanticipated challenge was the broader move to EBPs across the family and child sector, resulting in duplication of programs in some areas. This was highlighted by participants from two states and by national participants. This duplication forced CfC partners to rapidly select and upskill in new programs:

“One of our regional services was doing [program], but then the state government rolled out funding for that, and so the regional area became kind of flooded with that. So then they had to…go back to the drawing board…with all these other organizations providing the same evidence-based programs.” (P17)

Program factors

Implementation was influenced by the types of programs included in the Guidebook. Participants described the range as limited, heavily skewed towards parenting programs, and “quite a few of them just aren’t…relevant or appropriate for some of the families that these services work with” (P10). Accessing Guidebook programs was described as a “labyrinth”. Some program owners were not aware their program was in the Guidebook and were not interested in making it available. Others reportedly saw this as an opportunity to increase their prices. There was a general view that Guidebook programs and their associated training were expensive, few trainers were available for some programs, and accessibility was not equitable. For small organizations and those in regional and remote areas, time and costs were far greater, and the strain on resources more keenly felt, than for large metropolitan sites.

Workplace Factors Influencing Implementation

Workplace factors influencing sites’ ability to implement the policy comprised four sub-themes: workforce; organizational culture; formal and informal leadership; and it’s all about relationships.

Workforce

Workforce stability and competency were seen as integral to effective implementation. Lack of trained staff and high turnover were barriers, particularly for rural areas and sites where similar programs were being run by other providers. Finding and retaining suitable staff was a common issue, and the evidence-based policy itself contributed to this. High turnover was costly and at times led to program cessation:

“They have a huge turnover of staff, in remote areas — far more. And if they have the same turnover in the city, it’s not so hard to replace them. Remote you’ve really got a very small pool.” (P8)

“It can create competitiveness between providers…once they’ve trained a staff member to practice one of these programs they can then go and get a job anywhere else…” (P6)

Organizational culture

The way that organizational culture influenced implementation was described as a “willingness”, a “commitment”, a “strategic understanding and oversight”, and preparedness to “prioritize” evidence-based approaches. One participant explained how one site employed an additional staff member to support implementation: “It sends a message doesn’t it? We’re taking this policy change seriously. We are investing in this policy change and we are doing the utmost that we can to support [it]” (P4). Many organizations were “already on the journey” of working with evidence and measuring outcomes. Those who were “doing it anyway” and “hit the ground running” tended to be larger organizations with either in-house research and evaluation arms or strong partnerships with universities. Participants observed that individual workers who wouldn’t “buy-in to the process”, combined with the absence of a supportive organizational culture, made implementation very challenging.

Formal and informal leadership

Participants reported supportive organizational leadership was crucial to implementation, with considerable variability across leaders and site committees. One participant said: “I’ve actually heard the committees being the real champion” (P15); while another noted that if organizational leaders do not embrace evidence-based approaches “it’s very hard to move in that direction” (P13). Sometimes implementation success came down to one individual showing informal leadership, who “just loved the idea and ran with it” (P15) and was willing to do the hard work.

It’s all about relationships

Constructive relationships were regarded as vitally important across all levels of CfC: “I don’t think there’s a science to it, I really don’t [short laugh]. I think it’s really just about relationships” (P9). There were many relationships to build and maintain, which took time and effort. Participants described how the relationship-building efforts by Grant Agreement Managers and AIFS broke down barriers and helped sites to implement the policy and meet the requirements. Some service providers needed a “light touch”, while others required “really intensive” support. All DSS participants spoke about the positive effect of AIFS working directly with service providers. It countered the resistance, and helped providers “turn a corner”:

“[Service providers] often commented on how much of a difference that makes, the…face-to-face support. It’s almost like the capacity building has to start with relationship building…so that you can trust the person that is giving you this capacity building support.” (P15)

Networking between CfC sites fostered a supportive environment, creating learning opportunities and in some cases a pooling of resources for efficiency. Transparency of decision-making and the ability to provide support and resources to Community Partners were also critical: “Where I’ve seen the Facilitating Partners directly support Community Partners, Community Partners have done exceptionally well. They should be doing that, but they don’t all do that” (P13).

Finally, there was considerable investment required by service providers to build relationships with families and communities. Participants emphasized the value of sites offering “soft” entry points to their services, particularly for vulnerable families who may be “service-wary”, and how good relationships would lead to long-term engagement because “…the community respected and recognized the great work” (P7).

Reflecting on CfC’s Approach to Policy Implementation

Interviews elicited a range of views about the implementation of the policy from the perspective of participants themselves, as well as their reflections on differing responses among service providers. Some saw the transition from a locally autonomous model to the 50% evidence-based requirement as a “hearts and minds battle” to help service providers see that the intention was not punitive but aimed to provide better programs that improved child and family outcomes. Others spoke about service providers who understood the benefits right from the start and experienced an easier implementation journey. When reviewing the interview findings, we observed three distinct types of sites and organizations working in CfC:

Type 1 were “Enthusiastic and Confident”. They may not have had the knowledge and skills when implementation began but were confident they could develop or access them. They had positive attitudes towards the policy and were willing to try because they could see the potential benefits for families. Our observations are that some of these sites possessed a shared confidence and enthusiasm at a state level, and thus approached implementation as a state collective. Other sites were working in relative isolation but had an enthusiastic “champion” and supportive management, made use of the support services available to them, and strategically partnered with research and evaluation experts.

Type 2 were “Pragmatic and Confident”. They lacked “enthusiasm” for the policy change but decided to get on with it. These organizations were used to working with government and accustomed to adapting to new requirements. While these organizations initially focussed on compliance, our interviews suggest that their attitudes became more positive over time as the benefits of the policy change were realized.

Type 3 were “Resistant and Unconfident”. These sites were resistant to the policy, could see no benefits and were not confident of meeting the requirements given the resources available. All aspects of implementation seemed to be a struggle, and some of this resistance appeared to be driven at a regional or state level. To further our understanding of these three site types and the implementation of EBPs in the context of CfC, we turn to two potentially relevant models: organizational readiness for change; and sustainment of EBPs in disadvantaged communities.

Organizational readiness for change

While mandating the use of EBPs does lead to implementation, the implementation journey itself can depend on the level of enthusiasm and confidence organizations have, which may in turn affect implementation effectiveness. Applying Weiner’s theory of organizational readiness (Table 1) helps to further explain some of the similarities and differences in policy implementation described by participants (Weiner, 2009). Two types of sites were broadly described to us as being effective in their implementation: the “enthusiastic and confident” sites, and the “pragmatic and confident” sites. The group that struggled with implementation were the “resistant and unconfident” sites. The research team therefore classified the first two groups’ implementation effectiveness as “high”, and the latter’s as “low”. We question, however, the quality of implementation in the “pragmatic and confident” group given their initial prioritization of compliance over improving outcomes.

Table 1 Three observed types of CfC sites and organizations against Weiner’s Theory of organizational readiness for change

Sustainment of EBPs in disadvantaged communities

Implementation of the EBP policy mandate in CfC sought to provide greater certainty that the initiative would meet its aim of improving outcomes for children, families and communities in 52 disadvantaged locations across Australia. Critical to its success was the sustained implementation of chosen programs. As proposed in Hodge and Turner’s (2016) framework, sustained implementation of EBPs in disadvantaged communities can be guided by 18 factors encompassing program characteristics, workplace capacity, and process and interaction factors. In our study, government-level personnel described a variety of contextual, workforce and cultural factors that facilitated success or created barriers to policy implementation. We compared our findings to this framework and found a high degree of alignment (Table 2). However, one factor (supervision and peer support) was not apparent in our findings, and we identified three additional factors that were important in the CfC context: geography; service duplication; and refreshment and renewal. Two themes from our study were excluded from the analysis (the value of the Program Assessment Pathway and the highly valued “other 50%”) as they are unique to the CfC model.

Table 2 Agreement between study findings and Hodge and Turner’s sustainment factors

Geography

Geography intersected with many of the sustainment factors identified by Hodge and Turner; however, the frequency with which it arose in our data merits a separate category. The influence of geography on implementation may be most apparent in countries like Australia that are characterized by a high population concentration in a very small part of a large landmass. The distance of some sites from their support bases, seasonal variations, and the high proportion of Indigenous families in remote locations made it difficult to find programs that were a good “fit”. Competition for staff made workforce mobility and turnover a significant sustainment issue, and rural and remote sites bore substantially higher costs for staff training and program delivery.

Service duplication

Disadvantaged communities are typically underserved; however, government personnel indicated that some areas became saturated with EBPs funded by an array of local, state and federal government schemes. In some communities, this led to a duplication of services, an associated loss of demand, and a need to rapidly introduce an alternative program in order to continue to meet the policy requirement. This is a function of poor sector-wide coordination and is more likely to be a feature of high-income countries and those with multiple levels of government, such as Australia.

Refreshment and renewal

Our participants highlighted the need for service providers to have a thorough understanding of their communities. They expressed concern about stagnation once the evidence-based requirement was met, and that the sustained use of approved programs could result in complacency and limited innovation. Communities are dynamic, and needs assessments and service mapping should occur regularly to ensure EBPs are refreshed to meet current and emerging needs.

Discussion

This study examined the “real-world” implementation of an EBP policy as understood from the unique perspective of the government personnel overseeing and supporting its implementation. These personnel had a detailed and nuanced understanding of the challenges encountered and the factors that supported successful implementation, recognizing the significant individual, organizational, state and national effort required, and acknowledging that some sites had unique locational, population, and workforce hurdles. There was a clear recognition of the accrued benefit from implementing the policy, despite some residual skepticism. To our knowledge, only two studies have specifically examined the implementation of EBPs in place-based initiatives (Compass Evaluation and Research, 2015; Goff et al., 2013). Consistent with these studies, we found that the number of people and programs involved in implementation, along with their geographic dispersion, made implementation challenging, especially around monitoring and maintaining program fidelity.

Consistent with previous studies examining the scaling up of EBPs in real-world settings, our study found that supportive leadership, constructive working relationships at all levels of the initiative, skilled and capable staff, and high-quality evaluation were important drivers of implementation effectiveness (Boaz et al., 2008; Bumbarger & Perkins, 2008; Chamberlain, 2017; Fagan et al., 2019; Furlong & McGilloway, 2015; Prinz et al., 2009, 2016). Notably, these were not distributed evenly across the CfC initiative. There was evidence of enthusiastic, knowledgeable and skilled leadership, but its distribution was patchy at the state and community level; relationships between and within sites (i.e., between facilitating partners and community partners) were organization dependent; and the presence of skilled staff depended on a range of contextual and workforce factors. We found evidence of sites implementing the policy effectively because their organization already had a commitment to using evidence in practice. Others were successful because they employed implementation drivers (e.g., enthusiastic champions, supportive management, inter-agency cooperation) to compensate for a knowledge and skill deficit (Fixsen et al., 2016; Furlong & McGilloway, 2015). We also found evidence of low implementation effectiveness due to resistance and poor use of available resources and supports. These are complex factors to overcome in a large multisite, geographically dispersed initiative. Lessons learned from other studies suggest that greater synergy between policy, administrative processes to support implementation (e.g., state-sponsored training and supervision), and fiscal incentivization (Chamberlain, 2017), together with a greater focus on organizational readiness (Weiner, 2009) and careful attention to organizations or geographical “pockets” that are “resistant and unconfident”, is needed. Assessing organizational implementation climate with validated tools could help identify such pockets of resistance from the outset (Ehrhart et al., 2014). Greater investment in capacity building tailored to local needs may help to overcome some of the initial resistance and skill deficit, and support more effective implementation.

One previous study found that the right mix of programs contributes to effective implementation (Prinz et al., 2009), while another found that the “menu” of programs available via a policy mandate was not always a good fit for local conditions (Weiss et al., 2008). Our study made similar findings. Programs can and should be adapted to suit such local conditions where feasible, and service providers require technical support to build the local capacity to do this (Moore et al., 2013). One study found that training parents as co-facilitators can help build this local capacity (Furlong & McGilloway, 2015). Sometimes local programs that have a sound theoretical underpinning and promising, albeit less “robust”, evaluation findings may still be regarded by communities as a better fit. A sound understanding of each community is needed to determine this, accounting for local needs and the existing availability of programs in the community. Developing home-grown programs into “Promising programs” expands the available evidence base. Evidence-based requirements have similarly been applied to funding for a broad range of family services by the USA Federal Government via legislative and funding mechanisms, and we note that a broad range of programs has been assessed and made available via a central repository (U.S. Department of Health and Human Services, 2018, 2021). We also note that all programs are required to be evaluated in context, and we recommend CfC and like initiatives introduce a similar evaluative requirement.

We applied Hodge and Turner’s (2016) framework to our findings to examine the presence or absence of various factors that support or inhibit the sustained implementation of EBPs. We found a high degree of alignment, with three additional factors identified. We note that parental readiness and parental retention have also been identified as important factors (Furlong & McGilloway, 2015). We encourage funders and policy administrators of complex community initiatives for disadvantaged families to consider Hodge and Turner’s model, and the additional factors identified here, to ensure EBPs remain embedded within communities and continue to meet the needs of the families they serve.

Strengths and Limitations

Our study findings confirm results from internally commissioned research (ACIL Allen Consulting, 2016; Robinson, 2017), but extend the focus by examining factors influencing effective implementation and applying relevant frameworks and theories. A notable strength of this study is that it gives voice to the policy, operational and funding managers responsible for overseeing and supporting a new EBP policy imposed on a place-based initiative. This group is rarely the focus of such research but has a wealth of understanding of the variations in implementation across the sector. Our participants were well placed to reflect on the varying experiences of CfC sites: those in the national office worked with all 52 sites, those in the DSS state offices had worked with 70% of sites, and AIFS personnel had worked with around half the sites. There were no participants from the DSS office of one Australian state, and it is not known whether their perspectives would have differed. We interviewed everyone who accepted our invitation to participate, and those who did not respond may have had different experiences. Nonetheless, it is our view that rich and deep evidence has been generated due to the spread of locations, the mix of national and state roles from which participants were sourced (resulting in exposure to many and varied sites), and participants’ differing years of experience. Due to the small number of participants who work on CfC at the national level, and the risk that some sub-themes and supporting quotes may be attributable to individuals, we were unable to report the results by individual organizations or by “national” and “state” participants.

Conclusion

This study has illustrated the complex processes involved in the implementation of an EBP policy change. Levels of understanding of evidence-based practice, the rationale for change, and organizational readiness for change all vary, as do the contextual and workplace factors that impact program sustainability. In an initiative such as CfC, these complexities are multiplied, as they may be present or absent at the state, site or local organizational level. Our data suggest that successful implementation may be enhanced by initial and ongoing education about the purpose and benefits of EBPs, and by the early provision of extra resources and supports to organizations with a low level of organizational readiness for implementation. The research team is undertaking further work directly with community-level service providers to explore their experiences of the policy. What remains uncertain, given that all sites in all groups eventually implemented the policy, is whether different pathways to implementation were associated with different outcomes for children, families and communities. Moreover, there is variation in how well the policy has been implemented, and it cannot necessarily be assumed that the implementation of a 50% EBP requirement will bring about the expected improvement in outcomes for children. Evaluation of the impact of this policy, and indeed of similar policies elsewhere, on children and their families is therefore necessary.