Home visiting as an approach to delivering services to at-risk young children and families, particularly mothers, has grown in visibility and acceptance. Reviews of the research have found evidence supporting its effectiveness (Kahn & Moore, 2008; Nievar, Van Egeren, & Pollard, 2010; Sweet & Applebaum, 2004). Home visiting has been endorsed by groups such as the Pew Charitable Trusts (2010) and the Coalition for Evidence-Based Policy (2009), and support can also be found in the business community (Bartik, 2011; Institute for a Competitive Workforce, 2010; ReadyNation, n.d.; Rolnick & Grunewald, 2003).

Home visiting has long been used as an approach to provide services to young children and their families (Roberts, Wasik, Casto, & Ramey, 1991). The use of government funds to support home visiting varies across countries (Nievar et al., 2010). In the USA, federal funds have supported home visiting through programs such as Early Head Start, services for children with disabilities (through Part C of the Individuals with Disabilities Education Act, PL 101–476), and health department programs for newborn children. Until recently, however, there was no federal funding specifically for home visiting to at-risk families. In 2010, the US Congress established the Maternal, Infant, and Early Childhood Home Visiting (MIECHV) program, administered by the Health Resources and Services Administration (HRSA) and the Administration for Children and Families (ACF) of the US Department of Health and Human Services. MIECHV funds grants to states to help at-risk families voluntarily receive home visits from qualified staff to improve maternal and child health, child development, school readiness, and economic self-sufficiency, and to prevent child abuse. Eligible families are defined in the law as families who reside in at-risk communities. This program opens the door to home visiting for families previously unable to receive these services.

The MIECHV program comes with some restrictions. With limited exceptions, the programs implemented must be evidence based; programs that meet the evidence criteria have been identified, along with a process for adding programs as new evidence becomes available. The MIECHV program has also identified outcome benchmarks that all programs in all states must meet. These benchmarks are in the following areas:

  • Improved maternal and newborn health

  • Prevention of child injuries, child abuse, neglect, or maltreatment, and reduction of emergency department visits

  • Improvement in school readiness and achievement

  • Reduction in crime or domestic violence

  • Improvements in family economic self-sufficiency

  • Improvements in the coordination and referrals for other community resources and supports

These benchmarks represent worthy areas in which to achieve outcomes.

Underlying this positive legislation, a tension exists between research and policy. Although meta-analyses have identified positive outcomes from home visiting, the magnitude of those outcomes is small to moderate, and the same program implemented in different communities can yield different results (Astuto & Allen, 2009; Azzi-Lessing, 2011; Daro, 2006; Nievar et al., 2010). States are working to expand existing home visiting programs, while some in the research community want to see the development of more focused programs that better match target families with interventions for specific outcomes (Azzi-Lessing, 2011; Daro, 2006; Nievar et al., 2010). These two directions for moving the field forward, state expansion and research specificity, are not mutually exclusive. However, those expanding home visiting in states need to be aware of the limitations identified by research, whereas researchers need to be sensitive to the political climate so as not to derail the gains made in providing more services to at-risk families.

This chapter focuses on this tension among policy, research, and practice, and on the innovations needed to reduce it when implementing programs, primarily by examining how existing programs that work with at-risk families can incorporate innovation as they expand. The questions addressed include:

  1. What is the evidence for home visiting?

  2. How does the evidence affect practice in the USA?

  3. What is the role of innovation within evidence-based home visiting?

  4. How can innovation be supported?

The goal of this chapter is to provide solutions that can ensure that our programs meet required outcomes for most families, most of the time.

What Is the Evidence for Home Visiting?

Evidence-Based Practice

The words “evidence-based practice” are used in many fields these days, from medicine to psychology to education, and in almost every field that uses research to make decisions about what to do with people, whether in a treatment, an intervention, or a classroom. Most people have some familiarity with evidence-based practice, but in my opinion, most are not really clear on what it means. Experts variously talk about evidence-based practice, evidence-based programs, research-based activities, recommended practices, and so on. However, these terms are not interchangeable, and in many cases their misunderstanding and misuse come from the experts themselves.

The basic principles behind evidence-based practice are, first, that decisions about what we do as practitioners are based on research studies, typically quantitative research, and second, that the research has been rated against quality criteria. The first principle indicates the need for more than one study. Science is the accumulation of research on a given topic. One study with a positive finding is good, but if it is the only one of ten studies with that finding, we must ask whether the finding is real or merely random. Evidence accumulates from many studies consistently finding the same outcomes.

The second principle is harder to understand for those who are not researchers. This principle is: Not all research is equal. Research studies vary in quality, and quality lies in the design of the study. Some research is better able to reduce factors, referred to as threats, that limit our ability to say that A caused B (causality). That research was conducted by a famous researcher or published in a prestigious journal does not make it high-quality research (although it may be). All research is designed to help us answer questions and increase knowledge. Research quality, however, comes from factors such as how subjects were identified and assigned to groups, how well we know and document what happened to subjects during the intervention, how outcomes were measured, and how we account for factors that may provide alternative explanations for findings. Evidence-based practice requires that a group of experts has gone through a research study and rated the quality of the research; some studies will reach the desired criteria, others will not.

At a practical level, this means that one study with one positive finding is never enough to make a practice or program evidence based. Similarly, a handful of poor-quality studies that find the same outcome (although promising) do not make for an evidence-based practice. On the other hand, once a practice is deemed evidence based, one high-quality study that does not get the same outcome does not take away the designation of evidence-based practice. The message for us as consumers of research is that a designation of evidence-based practice must be based on many high-quality studies all of which agree that the intervention (program, curriculum, etc.) under examination caused the same outcome.

At one level this is very simple. Different professional organizations and federal agencies have been identifying criteria by which research can be judged and practices defined as evidence based. The complicated part is that these organizations and agencies do not all use the same criteria and/or do not look at the same outcomes. My recommendation is that practitioners refer to the agencies that fund their programs, and to the organizations to which they belong, for recommendations on evidence-based practice.

Evidence-Based Practice in Home Visiting

As part of the MIECHV program, the law requires that 75 % of the available funds be used for home visiting programs with evidence of effectiveness based on rigorous evaluation research: They must be evidence-based programs. In preparation for the implementation of the MIECHV program in the USA, an interagency workgroup from the US Department of Health and Human Services contracted with Mathematica Policy Research to review the home visiting research literature and assess the evidence of effectiveness of home visiting program models for pregnant women and mothers with children from birth through age 5. This review, begun in 2009, was called the Home Visiting Evidence of Effectiveness (HomVEE) review. Program models identified by HomVEE are the ones from which states must select programs.

HomVEE conducted a rigorous review process to identify relevant home visiting research. It focused on studies that examined outcomes in: (a) maternal health; (b) child health; (c) child development and school readiness; (d) reductions in child maltreatment; (e) reductions in juvenile delinquency, family violence, or crime; (f) positive parenting practices; (g) family economic factors; and (h) linkages and referrals. HomVEE applied evidence criteria developed by the US Department of Health and Human Services, under which the studies identified for each program model had to meet one of two primary criteria: (a) at least one high- or moderate-quality impact study of the model finds favorable, statistically significant impacts in two or more identified outcome domains, or (b) at least two high- or moderate-quality impact studies of the model using unique study samples find one or more favorable, statistically significant impacts in the same domain.
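These two criteria can be expressed as a simple decision rule. The sketch below is illustrative only, assuming a hypothetical Study record with a HomVEE quality rating, a study-sample identifier, and the set of outcome domains with favorable, statistically significant impacts; HomVEE publishes its reviews as reports, not code.

```python
from dataclasses import dataclass

@dataclass
class Study:
    quality: str                  # HomVEE rating: "high", "moderate", or "low"
    sample_id: str                # identifier for the study sample
    favorable_domains: frozenset  # domains with favorable, significant impacts

def meets_homvee_criteria(studies):
    """True if a program model meets either primary evidence criterion."""
    rated = [s for s in studies if s.quality in ("high", "moderate")]

    # Criterion (a): one high- or moderate-quality study with favorable
    # impacts in two or more outcome domains.
    if any(len(s.favorable_domains) >= 2 for s in rated):
        return True

    # Criterion (b): two or more high- or moderate-quality studies, on
    # unique samples, with favorable impacts in the same domain.
    for domain in {d for s in rated for d in s.favorable_domains}:
        samples = {s.sample_id for s in rated if domain in s.favorable_domains}
        if len(samples) >= 2:
            return True
    return False

# Example: two studies on distinct samples, each with a favorable impact
# in the same domain, satisfy criterion (b).
studies = [
    Study("high", "trial-A", frozenset({"child development and school readiness"})),
    Study("moderate", "trial-B", frozenset({"child development and school readiness"})),
]
print(meets_homvee_criteria(studies))  # True
```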

HomVEE published an executive summary of this process and its findings in 2010 and an updated executive summary in 2014 (Avellar et al., 2014; Paulsell, Avellar, Sama Martin, & Del Grosso, 2010). As this is an active, ongoing process, program models continue to be added, and interested readers should check the HomVEE website on a regular basis. Seventeen home visiting program models were identified as evidence based in the most recent summary (2014): Child First, Durham Connects/Family Connects, Early Head Start—Home Visiting (EHS), Early Intervention Program (EIP), Early Start (New Zealand), Family Check-Up, Family Spirit, Healthy Families America (HFA), Healthy Steps, Home Instruction of Parents of Preschool Youngsters (HIPPY), Maternal Early Childhood Sustained Home Visiting, Minding the Baby, Nurse Family Partnership (NFP), Oklahoma Community-Based Family Resource and Support (CBFRS), Parents as Teachers (PAT), Play and Learning Strategies (PALS) Infant, and SafeCare Augmented. Table 9.1 presents the outcome areas in which evidence was established for each program model, as reported in the 2014 summary and updated in June 2015 from the HomVEE website. Program models may have had multiple positive outcomes within a single outcome area, but these are not reported here (see Avellar et al., 2014, for more details).

Table 9.1 Domains with positive impacts as identified by the HomVEE review

The areas of greatest impact for the evidence-based models are in positive parenting practices (12 models) and child development and school readiness (11 models). Health outcomes were found in some models for children (eight models) and mothers (nine models). Six models found reductions in child maltreatment. The remaining outcomes were found in five or fewer programs. To be fair, not all programs measured all outcomes.

The results of the HomVEE review are positive. As of 2014, 17 program models were identified as evidence-based practice. Each of these program models has requirements for training, for supervision, and for how the program is put into practice. These fidelity requirements were included in the process for identifying program models as evidence-based practice and are needed so that we can verify that programs using these models are actually implementing them as designed.

From a practice perspective, however, concerns were also identified. No program model except HFA found outcomes in all MIECHV-required outcome areas; remember that all outcomes are required as part of MIECHV. Some program models did not find lasting effects after the program ended. For most program models, findings were not replicated in all studies reviewed or were not replicated for all positive outcome impacts. Some program models yielded unfavorable or ambiguous findings in some studies for outcomes found favorable in others.

The findings from the HomVEE review provide practitioners with a place to begin. The review allows them to select program models that have demonstrated outcomes in desired areas using quality research designs. However, the review also makes it clear that none of these program models is effective all the time, for all of the required outcomes. It raises questions about what the field needs as it moves forward with implementing these program models and about what we should expect from the models we implement. The reality is that we do not know which program model would work best for any particular family, in any particular place, for any particular outcome. The question becomes “How do we plan our services knowing this reality?”

Innovation in Home Visiting

The HomVEE review highlights not only positive home visiting program models and outcomes but also areas of concern. These concerns are amplified, for example, when the requirements for receiving funding (in the USA) demand that programs demonstrate outcomes in multiple areas, including outcome areas for which some program models have established no evidence or mixed evidence. Other concerns also exist. Many of these program models were developed for use in urban areas. Can they be successfully adapted to rural areas? Would distance technology be effective? The program models have been tested primarily with low-income Euro- and African American families. Will they be effective for new immigrants or for indigenous peoples (cf. Chaffin, Bard, Bigfoot, & Maher, 2012)? Most evidence-based models were developed with an earlier generation of parents. Will they work with young parents who expect more technologically based materials? There is an ongoing concern about the correct dosage of intervention (Roggman, Cook, Peterson, & Raikes, 2008) because the frequency of home visits seems to be a critical variable for success (Nievar et al., 2010). Beyond the frequency of visits, the length of time families stay in a program is a question. Research shows that families receive about half of the home visits expected according to program model designs (Paulsell, 2010; Riley, Brady, Goldberg, Jacobs, & Easterbrooks, 2008). These are only some of the many issues to be considered; Azzi-Lessing (2011) provides a detailed overview of many more critical issues for home visiting programs. Given the funds required to implement the evidence-based home visiting program models as they are currently structured, the question may be: How can we innovate and still remain true to our evidence-based program model?

Before innovation can occur, programs need to have a strong “theory of change” or logic model (Raikes et al., 2014; Roggman, Boyce, & Innocenti, 2008). Many resources available on the Internet discuss how to develop a logic model. At a basic level, a logic model includes program inputs, program outputs, and program outcomes. Figure 9.1 presents a very basic home visiting logic model. The program inputs are the program model fidelity components: the people hired as practitioners, the training they receive, the supervision they receive, the curriculum used, the plan for the frequency and duration of visits, and other similar fidelity concerns. Each evidence-based program model in the HomVEE review identified fidelity components, and those would be included here.

Fig. 9.1 Basic theory of change diagram

Program outputs have traditionally been aspects of service delivery. They include whether the home visits are occurring as regularly or for as long as they are intended. They also include information on whether the curriculum (if there is one) and/or the plan for working with the family are followed.

The program outcomes are those areas in which the program should have impacts. Most HomVEE program models demonstrated impacts in child development and school readiness, so these would be outcomes. Many program models had impacts on other outcomes, and those would be included here, as would all areas in which a program claims it should have outcomes. It is important that each of the components—inputs, outputs, and outcomes—be measured. Only by measuring each component can a program have evidence that what it does leads to the outcomes it wants. All of the evidence-based program models have a logic model, but these may not address the MIECHV requirements; it is good practice for individual programs to develop or adapt the logic model for their own program.
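As a concrete illustration, the sketch below lays out a basic home visiting logic model as a data structure in which every component is paired with how it would be measured. The component names and measures are hypothetical examples, not taken from any specific program model.

```python
# A minimal sketch of a logic model in measurable form. All component
# names and measures below are illustrative assumptions, not the
# requirements of any particular evidence-based program model.
logic_model = {
    "inputs": {  # program model fidelity components
        "staff_training": "hours of model-required training completed",
        "supervision": "supervision sessions per home visitor per month",
        "curriculum": "model curriculum adopted and in use (yes/no)",
    },
    "outputs": {  # aspects of service delivery
        "visit_frequency": "completed visits as a share of planned visits",
        "visit_duration": "average visit length in minutes",
        "curriculum_adherence": "planned activities delivered per visit",
    },
    "outcomes": {  # areas in which the program should have impacts
        "school_readiness": "child assessment score at program exit",
        "positive_parenting": "observed parent-child interaction rating",
    },
}

# Because each component names its measure, a program can check the
# full chain from inputs through outputs to outcomes.
for component, measures in logic_model.items():
    print(component.upper())
    for name, measure in measures.items():
        print(f"  {name}: {measure}")
```

Innovations, as in Fig. 9.2, would then appear as new entries in each layer, each paired with its own measure.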

Figure 9.2 brings innovation into the logic model. It shows an example of an additional layer of potential innovative components added to the basic logic model. The added innovations are not a comprehensive list, and not all innovations would happen at the same time; Fig. 9.2 is merely illustrative. Innovative inputs might include new types of training to impact new target outcomes. They might also include new ways of looking at families that could enhance existing processes, such as identifying family factors to guide individualization, or new procedures to begin implementing, such as a continuous quality improvement (CQI) process.

Fig. 9.2 Basic theory of change including process and innovation

Innovation at the output level may require that different aspects of services be examined to incorporate new strategies or deliver new content. Recent advances in practice suggest looking at the quality of what happens during the visit, the processes and practices (Design Options for Home Visiting Evaluation [DOHVE], 2012; Innocenti & Roggman, 2011; Paulsell, Boller, Hallgren, & Mraz Esposito, 2010), and this could take the form of new home visiting strategies to implement. The Home Visiting Rating Scale (HOVRS; Roggman, Cook, Jump Norman et al., 2008) is an example of a tool that measures home visitor practices during a home visit. The DOHVE and Paulsell et al. (2010) papers identify other new tools to consider. New types of content, related to new outcomes, may need to be delivered during home visits. Figure 9.2 also shows outputs related to engagement in the home visit; the importance of engagement, at both the parent and child levels, has been emphasized (Azzi-Lessing, 2011; Nievar et al., 2010; Paulsell, 2012).

The benchmark outcomes identified by MIECHV include outcomes that not all program models measured as part of the research underlying their evidence base. For program models in which these outcomes have not been tested in high-quality research, the new outcomes are innovative. For example, in the HomVEE review (Avellar et al., 2014), few of the evidence-based program models demonstrated impacts on reductions in family violence. Family violence would be an innovative outcome and, if included, would require a fresh look at the innovative inputs and outputs needed to affect it.

This approach to innovation is flexible and responsive to the need for change, whether that need is to meet new benchmark outcomes or to ensure the program is meeting outcomes for specific groups it serves (e.g., families with toxic stress). Each program needs to use its current logic model and adapt it as innovation is needed. Each component needs to be measured in a way that lets the program respond, making changes to inputs and outputs that are then reflected in outcomes. The next section provides more information on what is needed not only to put innovation into practice but also to ensure that evidence-based programs continue to be driven by evidence.

From Innovation to Practice

The people who work in home visiting programs work hard. It is difficult work, and in too many places people are not paid well for it. Innovation cannot be considered a luxury; it is a necessity, especially in an environment where having evidence is not only helpful but required. Innovation requires extra staff time and program resources, and infrastructure both for and within programs. Four areas that require consideration to make implementation effective and innovation part of ongoing practice are: implementation science, data-informed practice, supervision and coaching, and CQI.

Implementation Science

Evidence-based practice has helped identify program models with successful outcomes, but identifying program models is only the first step. The next step, which has been happening in the USA, is wide-scale implementation of these models. Adopting an evidence-based program model and obtaining the requisite training from the model developer does not ensure that the program will have the desired outcomes (Durlak & DuPre, 2008). An established evidence-based program model that expands to a new city or county may not be effective, or not effective in the same way, when implemented elsewhere (Azzi-Lessing, 2011; Paulsell et al., 2010).

Implementation of evidence-based program models is a practice in its own right. Fixsen, Naoom, Blasé, Friedman, and Wallace (2005) conducted a comprehensive review of the research literature on the implementation of programs from diverse fields of practice (including agriculture, business, child welfare, medicine, and others) and found that the implementation of evidence-based programs did not always go as intended. Just because a program has been proven effective does not mean that it can be adopted by others and successfully implemented. Fixsen and colleagues identified factors that lead to more successful implementation. This field of research has been called implementation science. Implementation science (or implementation research) is the scientific study of methods to promote the successful transition of programs from evidence-based practice to routine practice while maintaining the same outcomes. It examines the conditions that impact changes at the practice, organization, and system levels (Blasé et al., 2010).

Implementation science has identified six implementation drivers (see Metz, Blasé, & Bowie, 2007). These drivers, or components, were identified in research on programs that were implemented successfully. They are:

  1. Staff recruitment and selection

  2. Preservice or inservice training

  3. Coaching, mentoring, and supervision

  4. Internal management support

  5. Systems-level partnerships

  6. Staff and program evaluation

These drivers have recently been grouped into competency drivers (1–3 above), organizational drivers (4–6 above), and leadership drivers (technical and adaptive). Fixsen (2014) provides evidence that when the implementation drivers are used effectively, evidence-based programs can be implemented with 80 % fidelity in 3 years. This compares to 17 years of implementation with only 14 % fidelity for evidence-based programs that do not use the implementation drivers.

All of the HomVEE-identified evidence-based practice programs have requirements for staff recruitment and training. Each of the program models has identified the minimum requirements for staff. Most program models also have required training and practice criteria for a program to be considered as having fidelity to the model. Most program models require supervision. All require some type of evaluation, and some come with data collection and reporting requirements. These components are implementation drivers and need to be included in the logic model for a new program just beginning to implement an evidence-based program model.

Although some of the HomVEE evidence-based program models have national offices that oversee the general fidelity of the program, these offices do not necessarily work as part of a system, and each individual program is not necessarily linked to other similar programs within a state or region. One of the potential strengths of MIECHV is that it is administered at the state level (in the USA), which immediately puts home visiting into a larger system. This state system can help support the implementation drivers that facilitate implementation of the models and benefit all programs. This includes not only the implementation drivers that may be available as part of the model but also those that need to be added as the program implements the model in a new setting. The state system can help each home visiting program consider the implementation drivers specific to its model. Ongoing training may be needed to build skills to promote outcomes that are now required but were not emphasized in the research by the program model developers. For example, the competency drivers of coaching, mentoring, and supervision may be considered in the model, but programs may need additional technical assistance and resources to put them into place so that model fidelity is achieved. These drivers may be especially critical as home visiting programs innovate. The infrastructure needs to include ongoing data collection to support internal management, and the state may assist programs to implement organizational drivers, especially as each state has its own data reporting requirements. For effective implementation, the drivers need to operate as regular activities for as long as the program exists. Strong implementation science practices support the innovation process. Some key implementation drivers the state can facilitate to establish model fidelity are discussed in more detail below.

Data Availability

An organizational driver that supports staff and program evaluation is having data available for regular review—data-informed practice. Information about the key components of the program’s logic model is of paramount importance. A system is needed to provide an efficient way for staff to enter information into a data management system as it is collected. This system can build on the data management system that a program model may already use; most of the evidence-based program models require some level of data collection, and some already have extensive data collection systems. The data management system must be logical and easy to use both for the home visitor who enters the data from each visit and for the program managers who review the data.

The implementation of the MIECHV program is driving the development of data management systems in participating states. National consultants are providing services to develop these systems, and some states are hiring local consultants to design systems for local use. One example is Utah, where in-state consultants developed a data management system. The development of this system began before MIECHV funding, as part of another project examining the effects of home visiting programs, and then evolved to add MIECHV data requirements. Utah’s State Health Department has an Office of Home Visiting with oversight for MIECHV. Like many states, Utah has a mix of evidence-based home visiting program models being implemented and must be responsive to the needs of each program. The data system developers worked collaboratively with the state and program staff to identify needed data components. The process then included multiple field tests to get feedback on the system and make changes as needed. When new MIECHV benchmark requirements were added, a similar process was completed. Program staff and the state built a data management system useful to all.

After the data system is in place, staff must be trained to input the data regularly and to use the resulting information. The data entry process itself needs to be monitored until it becomes routine. Program managers use these data to evaluate how services are provided, how parents engage in home visits, and how outcome variables are affected. This information can be used to examine the effectiveness of the current logic model, as long as the logic model components have been identified, defined, and measured. If innovative components are added, these data will help programs understand the impacts of innovation. Supervisors must regularly access the data and use them to guide supervision. The process needs to be internally monitored to ensure it continues to be useful.
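To make this concrete, the sketch below shows the kind of per-visit record and supervisor-level summary such a system might support. The field names, rating scale, and values are hypothetical, not those of any state’s actual MIECHV data system.

```python
import csv
import io
import statistics

# A minimal sketch of visit-level data, assuming hypothetical fields:
# a parent engagement rating (here 1-5) and visit duration in minutes.
visit_log = io.StringIO("""family_id,visit_date,duration_min,parent_engagement
F001,2015-03-02,62,4
F001,2015-03-16,55,5
F002,2015-03-03,30,2
F002,2015-03-24,45,3
""")
rows = list(csv.DictReader(visit_log))

# A supervisor-level summary: visits per family, average duration, and
# mean engagement -- the kind of information reviewed in supervision
# and CQI meetings.
for fam in sorted({r["family_id"] for r in rows}):
    fam_rows = [r for r in rows if r["family_id"] == fam]
    mean_dur = statistics.mean(int(r["duration_min"]) for r in fam_rows)
    mean_eng = statistics.mean(int(r["parent_engagement"]) for r in fam_rows)
    print(f"{fam}: {len(fam_rows)} visits, "
          f"mean duration {mean_dur:.0f} min, mean engagement {mean_eng:.1f}")
```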

Developing a useful data management system takes funding and time for staff to learn and use. The Utah example demonstrates one benefit of being in a system such as MIECHV . This is a system driver for implementation.

Supervision and Coaching

Implementation research competency drivers identify professional development, including ongoing coaching and supervisory support, as critical to successful implementation. Our review of the current literature specific to professional development for home visitors finds few relevant articles on supervision, except for the outcome of improving staff retention, an important outcome with both program fidelity and cost implications. Home visitors who were given supervision and consultation had lower levels of emotional exhaustion and burnout, two variables found to negatively impact fidelity and retention (Aarons, Fettes, Flores, & Sommerfeld, 2009; Aarons, Sommerfeld, Hecht, Silovsky, & Chaffin, 2009).

Much of the supervision literature for home visiting focuses on the practice of reflective supervision (Heller & Gilkerson, 2011). However, Innocenti and Roggman (2011) found limited research support for reflective supervision and suggested a need to move beyond it to developmental supervision, an approach that maintains reflective supervision while adding practices drawn from recent reviews of the adult learning literature (Dunst & Trivette, 2009). This proposed approach includes supervision practices similar to coaching and incorporates measurement tools that provide information on implemented home visiting quality to the supervisor and home visitor (see Design Options for Home Visiting Evaluation [DOHVE], 2012; Paulsell et al., 2010; Roggman, Cook, Jump Norman et al., 2008).

A recent meta-analysis of the adult learning literature clearly identified that better outcomes result when the training process includes strategies that more actively involve the learner in using, processing, and evaluating the mastery of newly acquired skills (Dunst & Trivette, 2009). Coaching is an approach that makes use of these strategies. Research shows that coaching is an effective adult learning strategy for improving existing abilities, developing new skills, and gaining a deeper understanding of practices (Hanft, Rush, & Shelden, 2004). Coaching supports the learner in identifying what works, what might need to be done differently, and what level of support is needed from the coach (Rush & Shelden, 2011). Coaching has been used successfully in working with parents who have a child with disabilities (Janssen, Riksen-Walraven, Dijk, & Ruijssenaars, 2010). Results from randomized controlled trials of professional development interventions that included coaching, primarily in educational settings, revealed small but significant effects on children’s learning, with somewhat larger effects on intervention practices (e.g., Bierman, Nix, Greenberg, Blair, & Domitrovich, 2008; Powell, Diamond, Burchinal, & Koehler, 2010; Wasik, Bond, & Hindman, 2006). These results suggest that coaching provides a positive mechanism for changing practitioner behavior to align with evidence-based home visiting practices that lead to desired outcomes.

A point for consideration is the job descriptions of supervisor and coach. Supervisors typically hold a power differential with respect to the practitioner: they not only provide supervision but also typically have the power to make decisions on job advancement and pay. This can be a disincentive for practitioners to discuss weaknesses in their practice. Coaches, in contrast, are typically advanced peers who have no role in position recommendations. Although more expensive, dedicated coaches are advisable; note that this is a recommended practice rather than a research-based one.

Implementation science is clear that supervision and coaching are competency drivers for successful program replication. Although research on training for home visiting program staff is growing (Whitaker et al., 2012), best practices in coaching and supervision for home visiting need continued study.

Continuous Quality Improvement

CQI brings together the implementation drivers to ensure model fidelity and incorporate innovation. CQI has been described as the process of identifying, describing, and analyzing strengths and problems, and then testing, implementing, learning from, and revising solutions. CQI uses the data collected by the program, data that measure outcomes and implementation practices, to identify what the program is doing well and where it needs improvement. All program staff are involved in looking at the data to answer the questions of where the program needs to improve and, based upon the program logic model, what it needs to change. The CQI process needs to occur in an atmosphere where the activities and performance of staff are discussed in a nonthreatening manner. The goal is to improve—not to blame—and this requires a network of good strengths-based relationships among program staff (similar to what we want to see between home visitors and parents). Staff identify potential solutions, and these are implemented quickly; the process is action oriented. As part of the process, staff members determine how to collect information on the solution they identify. CQI may require the collection of new data on inputs, process, outputs, or outcomes, depending on the focus of the CQI process. The CQI process needs to be part of regular meetings, at least monthly, at which the program data are reviewed and immediate changes can be made. The process is cyclical: it does not stop until the initial concern is resolved, and once resolved, the new solution becomes a potential area for more CQI. The goal is to develop a program that emphasizes quality as defined by meeting the program goals. Ammerman, Putnam, Margolis, and Van Ginkel (2009) provided a clear description of the practice of CQI and examples of its implementation in child abuse prevention programs.

The CQI process is logical and straightforward (Langley et al., 2009). It begins by asking some basic questions about the program’s aims, measures, and ideas. The aims question is: What are we trying to accomplish? The measures question is: How will we know that a change is an improvement? The ideas question is: What change can we make that will result in that improvement? Once these questions are answered, the CQI process uses a plan-do-study-act (PDSA) cycle in which all steps are written down. The “plan” is the decision on what needs to change. The “do” is implementing the changes identified. The “study” is looking at your measures to monitor and analyze the impact of your changes. The “act” is revising and standardizing the changes. The cycle is then repeated for the new “act.” Ammerman and colleagues (2009) provided information on this process from a human services perspective (see also Bickman & Nosser, 1999). Websites by groups such as the Casey Foundation, the Center for Institutional Effectiveness, and the Institute for Healthcare Improvement have information to help better understand the process.
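The PDSA cycle can be sketched as a simple loop. The sketch below is a toy illustration under assumed numbers (an outcome measure that a planned change nudges upward each cycle); real programs would substitute their own logic model measures and changes.

```python
# A minimal sketch of the plan-do-study-act (PDSA) cycle. The plan,
# do, study, act, and goal_met functions are placeholders a program
# would replace with its own measures and actions.
def pdsa(state, plan, do, study, act, goal_met, max_cycles=10):
    for cycle in range(1, max_cycles + 1):
        change = plan(state)          # Plan: decide what needs to change
        do(state, change)             # Do: implement the change
        result = study(state)         # Study: monitor/analyze the measures
        act(state, change, result)    # Act: revise and standardize
        if goal_met(state, result):
            return cycle              # initial concern resolved
    return None                       # keep cycling (or revisit the plan)

# Toy example with assumed numbers: each cycle of a chosen change
# (e.g., added staff training) raises an outcome measure by 0.05
# until the program's target of 0.65 is reached.
state = {"outcome": 0.50, "target": 0.65}
cycles = pdsa(
    state,
    plan=lambda s: "add staff training on the target outcome",
    do=lambda s, c: s.update(outcome=round(s["outcome"] + 0.05, 2)),
    study=lambda s: s["outcome"],
    act=lambda s, c, r: None,  # here: keep the change as-is
    goal_met=lambda s, r: r >= s["target"],
)
print(f"Goal met after {cycles} PDSA cycles")  # Goal met after 3 PDSA cycles
```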

Earlier, we discussed an example of innovation: a program with a strong evidence base for child development outcomes that is also required to focus on decreasing intimate partner violence, an area not supported by prior research on that program model. The CQI process could help drive this innovation. The program would set a goal of decreasing intimate partner violence. As its “do,” the program decides to provide specialized training to staff on intimate partner violence. The program could measure that this training occurred, which staff members attended, and what staff learned about certain skills or techniques. It could then examine changes in its intimate partner violence outcome from before to after the training, the “study” part of the cycle. The program would then need to “act” on what it had learned.

Continuing with this example, imagine that the program finds that the training has no effect on the intimate partner violence outcome. The next strategy (plan) may be to have the home visitor provide mothers with specific information during a home visit on how to respond to intimate partner violence. This could be implemented for all program participants or only for mothers who respond in certain ways on a regularly used measure or who express concerns to the home visitor about intimate partner violence (do). The program would not only continue to measure the training staff receive but also add a measure indicating whether the parents received the specific information; this could be as simple as the home visitor reporting that the information was provided to each parent. The program would continue to monitor changes in intimate partner violence outcomes (study). The information obtained would lead to decisions on what to do next (act). The program might continue this approach or try something else. The cycle continues until the program has achieved the desired outcome. Once the desired outcome has been met, the program continues to monitor it while identifying the next outcome for the CQI process. This CQI process can be a strong driver of innovation and quality.

Conclusions

Research has demonstrated the effectiveness of home visiting. The practice of home visiting has received support from groups ranging from those focused on societal outcomes to those focused on business outcomes. Governments in different countries have responded and are supporting home visiting. This support provides the opportunity to expand home visiting and potentially improve outcomes for more families and children. However, with this support also come added pressures. Evidence-based home visiting programs are being asked to expand to new groups of people or to focus on additional outcomes, groups and outcomes on which the original program may not have been evaluated or may not have demonstrated effectiveness. Programs need to work to establish evidence for these new groups if home visiting is to continue receiving funding and support. Even where funding and support are secure, programs should collect data to monitor quality and ensure desired outcomes are being met.

This chapter provided information on the expansion of home visiting in the USA and the pressures that result from this expansion. These pressures require that programs become more comfortable with innovation and adopt practices that will help lead to successful innovation. These practices need to be supported, through funding and infrastructure development, by governments and other agencies who oversee the expansion of home visiting programs and by the programs themselves, if these programs are to continue to be effective.

This chapter has identified some of the practices that should help home visiting programs as they move forward with expansion and innovation. These practices only represent some of the steps that may be required. There may be other practices not yet anticipated or different practices needed for different systems. The goal of this chapter was to provide ideas for consideration. Just as home visiting programs need to report the data on outcomes, systems research is needed to determine what practices are necessary for successful expansion.

The final goal of these practices is that we gain a better understanding of which programs and program activities work best for which families and for which outcomes. Providing home visiting programs with data and the supports needed to use the data on a regular basis allows those in the field to better address the needs of those they serve. This process needs to occur in an environment that continues to support research within and across different home visiting programs. At the same time, support is needed for developing new programs and for programs that are establishing their evidence base. New programs bring with them innovations that help improve the entire home visiting field. These are some of the challenges that need to be met to continue to help those who can benefit from home visiting.