
Promoting children’s mental health in education environments has many advantages. In the United States, preschool education and the care of children are not organized or consistent across jurisdictions or income levels. Consequently, the first time society attends to children’s development in an organized way is when they enter kindergarten or grade one. Investment in promoting children’s mental health, which is critical to child development and to society, can therefore be supported universally through the educational system.

The goal of this chapter is to highlight approaches to strengthening educational systems for the promotion of mental health, drawing on implementation and scaling research and systems science perspectives. We introduce theoretical and practical frameworks that incorporate both perspectives and derive strategies for creating enabling contexts for promoting children’s mental health in educational environments.

Defining Mental Health

The World Health Organization defines mental health as: “A state of well-being in which the individual realizes his or her own abilities, can cope with the normal stresses of life, can work productively and fruitfully, and is able to make a contribution to his or her community” (WHO, 2004). Just as physical health is more than the absence of disease, mental health is more than the absence of mental illness.

Everyone needs the opportunity to learn and practice skills to manage life and engage with others in the world. Skills to manage stress, find balance and focus, and engage socially are critical components that should be cultivated throughout the lifespan in both formal and informal settings. Skills and experiences that help people feel valuable and engaged in their family, community, and economy are critical to society.

Population Mental Health

In this chapter, mental health promotion is viewed from a population point of view. That is, all children from kindergarten through age 18 are included in the population of interest. In implementation terms, this presents a major scaling challenge. Fixsen, Blase, and Fixsen (2017) draw attention to the numerator and denominator when assessing population impact. The denominator for scaling is defined by the specific population of concern. For school-based population mental health, the denominator in the U.S. is nearly 60 million school-age children and youth and their families. The numerator is defined by the number of individuals in the population who experience designated promotion or intervention methods. Recognizing that innovations do not produce social impact unless they are used as intended in practice (McIntosh, Mercer, Nese, & Ghemraoui, 2016; Weare & Nind, 2011), the quality of interventions as they are delivered in practice is an important aspect of scaling (Tommeraas & Ogden, 2016).

Developing sufficient implementation capacity to produce and sustain high fidelity uses of designated mental health promotion methods is essential for scaling and for the mental health of all children and youth. Generating a high-quality numerator for population mental health in school settings in the U.S. will require change for over six million teachers and staff working in about 100,000 schools situated in nearly 15,000 districts located in 3,147 geographic counties in 50 states and the District of Columbia. Any school-based efforts to promote children’s mental health must be done with the population and the quality of intervention as delivered in practice in mind.
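To make the numerator and denominator concrete, the sketch below works through the arithmetic of reach. The denominator comes from the counts above; the number of students reached, the fidelity rate, and the idea of a fidelity-adjusted numerator are illustrative assumptions, not reported data.

```python
# Illustrative sketch of the numerator/denominator logic described above.
# The denominator comes from the text; the numerator and fidelity rate are
# hypothetical values used only to show the calculation.

DENOMINATOR = 60_000_000  # approximate U.S. school-age children and youth


def reach(numerator: int, denominator: int = DENOMINATOR) -> float:
    """Proportion of the population experiencing the promotion methods."""
    return numerator / denominator


def fidelity_adjusted_reach(numerator: int, fidelity_rate: float,
                            denominator: int = DENOMINATOR) -> float:
    """Reach counting only delivery that meets fidelity standards, reflecting
    the point that innovations matter only when used as intended in practice."""
    return (numerator * fidelity_rate) / denominator


served = 3_000_000  # hypothetical number of students reached
print(f"Raw reach: {reach(served):.1%}")                                       # 5.0%
print(f"Fidelity-adjusted reach: {fidelity_adjusted_reach(served, 0.6):.1%}")  # 3.0%
```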

Fundamental changes in interventions and systems must be considered if population mental health goals are to be realized in the coming decades. Current systems of care and school-based interventions have led to modest and often unsustained outcomes for children’s mental health (Bruns & Walker, 2010; Weare & Nind, 2011). Herrman and Jané-Llopis (2012, p. 16) conclude their review of the field by stating, “Experience is growing with the development of partnerships and implementation of interventions across welfare, education, health, urban and rural planning, business and other sectors in countries of all types.” Sugai, Freeman, Simonsen, La Salle, and Fixsen (2017, p. 62) illuminate current social challenges to positive school-based programs and conclude that:

Contemporary school and classroom challenges must be defined, verified, and discussed. However, emphasis must be shifted quickly from rumination to prevention. A multitiered system of prevention practices requires moment-to-moment, hour-to-hour, day-to-day, month-to-month, and year-to-year engagement. Practice selection and adoption are necessary but insufficient. Equal, if not more, attention must be directed toward systems or organizational supports (leadership, decision making, support continuum) that enable practice use to be effective, efficient, durable, and relevant. If intervention fidelity is high and sustained, preventing the development and occurrences of our contemporary challenges is thinkable and doable.

Implementation and Scaling Practice and Science

When fundamental change is considered, three factors must be accounted for simultaneously to improve population mental health. The three impact factors (referred to as the formula for success) are (Fixsen, Blase, Metz, & Van Dyke, 2015):

$$ \text{Effective innovation} \times \text{Effective implementation} \times \text{Enabling context} = \text{Socially significant outcomes} $$

What are the implications for population mental health for children? Each factor in the formula is essential, and together they are necessary for achieving socially significant outcomes such as population mental health. At this stage in the movement toward population mental health, effective innovations have been identified, the science base for effective implementation methods is reaching a more refined level, and enabling system contexts are better understood. It should be noted that the product, socially significant outcomes, is limited by the lowest factor in the formula. For example, if implementation is not effective and has a value of zero, then the product (population mental health) will be zero. While real-life variables are not as precise as the factors in this formula, the three factors need to be present and strong to produce desired outcomes (Fixsen, Naoom, Blase, Friedman, & Wallace, 2005).
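The multiplicative logic can be illustrated with a minimal sketch, assuming each factor is scored on a 0 to 1 scale (the scale is an assumption for illustration; as noted above, real-life variables are not this precise):

```python
# Minimal illustration of the formula for success as a product of three factors.
# Scoring each factor from 0 to 1 is an assumption made purely for illustration.

def expected_outcome(innovation: float, implementation: float, context: float) -> float:
    """Socially significant outcomes as the product of the three impact factors."""
    return innovation * implementation * context


# A strong innovation with no effective implementation yields no population impact.
print(expected_outcome(0.9, 0.0, 0.8))  # 0.0
# Even when all factors are present, the product is constrained by the weaker ones.
print(expected_outcome(0.9, 0.7, 0.8))  # 0.504
```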

With the advent of the evidence-based practice movement, science-to-service gaps have been identified as a major obstacle to achieving intended socially significant outcomes (Perl, 2011). The lack of focus on implementation practice and science has been identified as a major contributor to the science-to-service gap, with Kessler and Glasgow (2011) arguing for a moratorium on randomized controlled trials (RCTs) that produce more innovations that will not be used in practice. While RCT resources are not likely to be redirected any time soon, the practice and science of implementation continue to progress, led by those who are doing the work of implementation in service settings (e.g., Bond et al., 2001; Chamberlain, 2003; Fixsen, Blase, Timbers, & Wolf, 2001; Mowbray, Holter, Teague, & Bybee, 2003; Ogden, Forgatch, Askeland, Patterson, & Bullock, 2005; Schoenwald, Brown, & Henggeler, 2000).

Changing Systems on Purpose

A major consideration is how to initiate and manage fundamental system and practice change to promote mental health for the population of school-age children. The typical failure of system change efforts has been well documented in many fields for many decades (e.g., Chase, 1979; Coburn, 2003; Nord & Tucker, 1987; Nutt, 2002; Schofield, 2004; Van Dyke & Naoom, 2015; Vernez, Karam, Mariano, & DeMartini, 2006). For example, mediocre literacy scores for 9-year-old children have persisted since they were first systematically measured by the Institute of Education Sciences in the 1960s. Literacy scores have hovered around an average score of 215 on a 500-point scale despite decades of reforms, quick fixes, and evidence-based approaches to education (National Center for Education Statistics, 2013; National Commission on Excellence in Education, 1983). Massive national investments have successfully built an interstate highway system (McNichol, 2006) and taken astronauts to the moon and back (Dicht, 2009) but have failed to realize the vision of improved human services and education (Rossi & Wright, 1984; Watkins, 1995) in the last century or in the new millennium.

In human service systems, services cannot be shut down, reconfigured, re-skilled, and restarted in some new and hopefully more effective mode. The requirement to develop a more functional and effective system while continuing to meet human service demands using the existing system adds degrees of complexity not faced by road builders or rocket launchers. It is not OK to blow up an education-system-change rocket and then move on to a hopefully improved version. When people and public services are involved, every failed attempt has lasting impacts that make meaningful change that much more difficult (Rittel & Webber, 1973).

To prevent change leaders from being overwhelmed by the systemic issues that need to be resolved, systems change is initiated in a transformation zone (Fixsen, Blase, & Van Dyke, 2012). A transformation zone is a portion of the entire system, from the practice level to the policy level, that includes all major levels of the system within a selected geographic region (e.g., a regional education agency and the districts, schools, towns, and neighborhoods in that region). The portion is big enough to encounter nearly all the vertical and horizontal issues likely to arise in system change and small enough to keep those issues manageable until the beginnings of the “new system” are established and functioning well. Doing system change work in a transformation zone has the advantages noted for “continuous delivery” (Humble & Farley, 2011), where enabling system components are developed and tested in real time, allowing effective functions, roles, and structures to be established and errors to be quickly detected and corrected in daily practice.

The work in the transformation zone is accomplished by engaged staff and stakeholders at each level of the system. Engaged leaders, staff, and stakeholders help to ensure that any selected mental health promotion innovations are the right thing at the right time for the specific subpopulation in the transformation zone and help to assure the macro environment will enhance and not undermine the innovation and associated implementation supports. The goal is to establish a system with aligned and integrated resources that leverage high levels of mental health promotion activities and continual improvement in outcomes for students, families, and society. We will reference the work to be done in the transformation zone throughout this chapter.

Effective Innovations

For school-based mental health, effective innovations are described in chapters of this handbook and in reviews of the field (Weare & Nind, 2011) and will not be detailed here. Effective mental health promotion activities are designed, selected, or adapted to fit the local context, to address often varied and complex realities, and to leverage local system strengths to drive meaningful change. From an implementation and scaling point of view, innovations and interventions need to be effective and usable in practice. Usable innovations in the Active Implementation Frameworks meet four criteria (Fixsen, Blase, Metz, & Van Dyke, 2013):

1. They are described clearly and specify inclusion and exclusion criteria for the intended beneficiaries.

2. The core components (“active ingredients”) are identified, and rationales are provided regarding their importance for achieving intended outcomes.

3. The core components are operationalized with practice profiles that specify what practitioners do and say when they are using the core components in practice (also known as the innovation configuration; Hall & George, 1978; Hall & Hord, 2011).

4. A measure of fidelity is available that assesses the presence and strength of the core components as they are used in practice, and the fidelity data are highly correlated with intended outcomes.

Selection and development of mental health promotion activities is a community affair (Kim, Gloppen, Rhew, Oesterle, & Hawkins, 2015) that begins with system mapping in communities in the transformation zone. For example, system mapping can be done by focus groups of individuals and families who understand what already is being done to support children’s mental health, the resources they are aware of, and where they are struggling. In general, system mapping methods seek to illuminate the five Rs (USAID, 2014): results (what success looks like and what is currently measured), roles (who affects or is affected by change in those results, such as stakeholders), resources (what is available to use and to support implementation of the innovation or change), relationships (the most important connections among individuals and groups that could support or undermine change, such as trust, influence, collaboration, funding, and information flow), and rules (the formal and informal rules that govern how the system behaves).
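One way to picture the output of such a mapping exercise is a simple record organized by the five Rs. The sketch below is hypothetical: the data structure and example entries are illustrative, not part of the USAID (2014) framework, and a real community would populate it from its own focus groups.

```python
# Hypothetical sketch of how focus-group input might be organized under the
# five Rs (results, roles, resources, relationships, rules). Field names and
# example entries are illustrative only.

from dataclasses import dataclass, field
from typing import List


@dataclass
class SystemMap:
    results: List[str] = field(default_factory=list)        # what success looks like, what is measured
    roles: List[str] = field(default_factory=list)          # who affects or is affected by change
    resources: List[str] = field(default_factory=list)      # assets available to support implementation
    relationships: List[str] = field(default_factory=list)  # trust, influence, funding, information flow
    rules: List[str] = field(default_factory=list)          # formal and informal rules governing the system


community_map = SystemMap(
    results=["student well-being survey scores", "attendance rates"],
    roles=["teachers", "families", "district leadership", "community mental health agency"],
    resources=["existing social-emotional curriculum", "school counselors"],
    relationships=["district-agency referral agreement", "parent-teacher association"],
    rules=["state literacy mandate", "informal norms about asking for help"],
)
```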

Using system mapping methods, mental health promotion in one community should be expected to differ from that in other communities. Nevertheless, any approach to mental health promotion must be tested against the four usable innovation criteria. Innovations, interventions, and approaches that meet the usability criteria are more likely to be teachable, learnable, doable, and assessable in practice, an essential foundation for scaling and impacting whole populations in a community or a nation.

Effective Implementation

For mental health promotion, the Active Implementation Frameworks (e.g., Fixsen et al., 2005) provide an evidence-based approach to support the full and effective implementation of innovations on a socially significant scale. The Active Implementation Frameworks combine:

1. Usable innovations: operational descriptions of innovations that include a practical assessment of fidelity that is highly correlated with intended outcomes

2. Implementation teams: groups that are highly skilled in the use of the Active Implementation Frameworks and in affecting organization and system change

3. Implementation drivers: methods to assure the development of innovation-related competencies, organization changes, and engaged leadership that support high fidelity use of innovations in practice

4. Implementation stages: exploration (creating readiness), installation (amassing human and financial resources), and initial implementation (beginning to support the use of the innovation in practice) activities and outcomes that support eventual full implementation (at least 50% of practitioners meet fidelity standards for using the innovation in practice; a minimal sketch of this criterion follows this list) within organizations and systems

5. Improvement cycles: plan-do-study-act cycles and usability testing methods for purposeful problem-solving and continual improvement in methods and outcomes

6. Systemic change: practice-policy communication protocols to align, integrate, and leverage existing structures, roles, and functions so that the implementation supports for the innovation maximize intended outcomes at scale
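As referenced in the stages item above, the full implementation criterion can be expressed as a simple check on the proportion of practitioners meeting fidelity standards. In the sketch below, only the 50% threshold comes from the framework description; the per-practitioner fidelity cutoff and the example scores are hypothetical.

```python
# Sketch of the full implementation criterion: at least 50% of practitioners
# meet fidelity standards for using the innovation in practice.
# The per-practitioner cutoff and the example scores are hypothetical.

FIDELITY_STANDARD = 0.80          # hypothetical per-practitioner cutoff
FULL_IMPLEMENTATION_RATE = 0.50   # criterion stated in the stages framework


def at_full_implementation(fidelity_scores: list) -> bool:
    meeting = sum(score >= FIDELITY_STANDARD for score in fidelity_scores)
    return meeting / len(fidelity_scores) >= FULL_IMPLEMENTATION_RATE


print(at_full_implementation([0.85, 0.90, 0.60, 0.75, 0.82]))  # True: 3 of 5 meet fidelity
```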

The evidence and practice bases for the Active Implementation Frameworks have been documented (e.g., Blase, Fixsen, Naoom, & Wallace, 2005; Fixsen et al., 2005; Metz & Bartley, 2012). The Active Implementation Frameworks have been operationalized, so they are teachable, learnable, usable, and assessable in practice (for examples, see the AI Hub: http://implementation.fpg.unc.edu), and the frameworks have been and are being used proactively to establish implementation capacity and improve outcomes (Fixsen et al., 2013; Metz et al., 2014).

The essential first step in using the Active Implementation Frameworks is to establish a local Implementation Team. A team consists of three to five individuals who are experts in the use of the Active Implementation Frameworks. Initially, the Implementation Team members likely will convene the focus groups; do the system mapping; participate in selecting and developing mental health promotion innovations, interventions, and activities that meet the usable intervention criteria; use the implementation drivers as a guide to find or develop the expertise to build competencies among local school-based and other practitioners; help schools and other organizations change to support the use of promotion activities; and assure appropriate and engaged leadership in schools and the community. Scaling (as defined in this chapter) requires a high-quality numerator to reach the population of school-age children defined in the denominator. Scaling requires expanding implementation capacity in the form of expert Implementation Teams across communities, the state, and the nation. Such teams are a necessary means to the desired socially significant outcomes.

Enabling Context

An enabling context is the third factor in the formula for success. In the Active Implementation Frameworks, the context refers to the system in which organizations provide services to people. For example, schools provide teaching, learning, and mental health promotion services to students in the context of district, state, and federal education systems. The goal is to assure that the structures, roles, and functions within a system are more enabling than hindering in their impact on the services provided and the degree to which socially significant outcomes can be achieved. Accordingly, for school-based mental health interventions to be successful, the micro (individual), meso (organizational), and macro (systems) levels of systems have to be taken into account (Fixsen, Schultes, & Blase, 2016).

There are three aspects to be considered with respect to an enabling context. The first reflects the extent to which the current context supports the desired outcome among the target population – how well does the current environment in a given community support children’s mental health? The systems mapping focus groups and community involvement leading to mental health promotion actions provide a list of possible ways in which the current system does and does not support children’s mental health. The gap between the current system and the system that is needed provides an indication of the amount of systemic change that is needed.

The second aspect of an enabling context is the support for Implementation Team formation and development. Enabling contexts purposefully support the use and expansion of effective implementation methods to assure the high fidelity use of effective innovations in practice on a population scale.

The third aspect of an enabling context reflects the extent to which the broader system’s reaction to the innovation supports it. School-based mental health innovations are implemented in a broader context with competing objectives (e.g., ensuring children’s mental health, access to healthy food, public safety, balancing the budget) and limited resources. Delays often exist between innovation and observable improvements in outcomes, making it hard to learn what works when so many things are constantly changing. Given the interconnectedness of stakeholders in and outside school systems and the impact others can have on an innovation’s success, anticipating external reactions to the use of innovations is important. It is understood that mental health promotion activities will disturb the existing system and point to areas that need to change so that implementing organizations can execute innovation and implementation plans with high fidelity to maximize impact. With feedback from the practice level, policymakers and leaders can “change the structure of our systems, creating different decision rules and new strategies” to reduce the likelihood that the system inadvertently will undermine its investment in its mental health promotion goals (Sterman, 2006, p. 509). Such “policy resistance” within systems might be driven, for example, by side effects of implementing school-based mental health innovations within schools or outside the boundary of schools. An example of the former might be a school-based mental health innovation that disrupts social interaction among the targeted students, undermining attempts to bolster well-being. An example of the latter might be community or state investment in children’s mental health services being decreased because decision-makers see services within schools as duplicating their effort. To make a more enabling context in the first example, stakeholders might discuss strategies for providing school-based mental health intervention without disrupting social interaction within the school day. In the second, the school-based mental health innovation should be developed in collaboration with community and state mental health systems and decision-makers to ensure the programs are synergistic and their combined theory of change is clearly communicated. Systems can be enabling or hindering in various ways (Fixsen et al., 2005, 2013, p. 59).

Developing an Enabling Context

Existing human service systems are legacy systems that are the product of “[d]ecades of quick fixes, functional enhancements, technology upgrades, and other maintenance activities [that] obscure application functionality to the point where no one can understand how a system functions” (Ulrich, 2002, pp. 41–42). Legacy systems represent a layered history of well-intentioned but fragmented changes. Legacy systems are a poor fit with methods for promoting mental health for 60 million students in 98,000 schools in the United States.

The development of expert Implementation Teams and the full and effective use of the Active Implementation Frameworks and innovations in practice disturb the status quo and create a degree of instability and uncertainty that are goads to action. Disturbing the status quo creates a chaotic context (Snowden & Boone, 2007) that demands rapid responses to issues as they arise. The executive leadership at each level of the system must be prepared for frequent (weekly, monthly) communication from the front line and be prepared to engage in constructive problem-solving with constituents within and outside the system. As roles, functions, and structures are strengthened and barriers are eliminated, coherence is created as system components and resources are aligned with clarified system goals and intended outcomes. The Practice-Policy Communication Cycle is the timely communication from the practice level to the executive leadership (policy) level to inform policymakers of the intended and unintended consequences of policies and guidelines (Fixsen et al., 2013). The “cycle” is completed as the executive leaders make changes that remove barriers and increase support for the full and effective use of innovations. The cycle continues as those changes are further evaluated for impact and improvement or are deemed functional enough to be embedded in policies and guidelines. In this way, legacy systems are changed in functional ways so that innovations are not crushed by the already established routines that sustain the status quo (Nord & Tucker, 1987).

As Sterman (2006) states, “Deep change in mental models, or double-loop learning, arises when evidence not only alters our decisions within the context of existing frames, but also feeds back to alter our mental models. As our mental models change, we change the structure of our systems, creating different decision rules and new strategies. The same information, interpreted by a different model, now yields a different decision” (p. 509). In addition, “we must be able to cycle around the loops faster than changes in the real world render existing knowledge obsolete” (p. 509). Thus, an intended outcome of disturbing the system is to provide leaders with opportunities to redesign system roles, functions, and structures; in essence, to develop a new system on purpose. With the Practice-Policy Communication Cycle in place and Implementation Teams functioning as sensors of alignment and misalignment at the practice level, the executive leaders have the ability to continually “monitor and question the context in which it is operating and to question the rules that underlie its own operation” (Morgan & Ramirez, 1983, p. 15).

An Example from the Field

This chapter has provided an outline and brief description of the key elements of scaling school-based mental health innovations, interventions, and activities to promote mental health for all school-age children. Words on a page are linear. The work described in this chapter is not linear. It is complex and simultaneous with many iterations as obstacles are encountered and eventually overcome. There is no end to it, since life continues to change at a rapid rate. Thus, an example of usability testing and continual improvement will provide a realistic ending for this chapter.

An Example from the Field: PDSA/Usability Testing Methods for Developing and Integrating Effective Interventions, Effective Implementation, and Enabling Contexts to Produce Socially Significant Benefits on Purpose

An example of an approach to establishing usable interventions is provided below. Note how PDSA cycles are used to develop the innovation and its implementation supports simultaneously.

The process outlined below involved nine teachers over the course of 4 months. In a usability testing format, the Implementation Team worked intensively with three teachers at a time to maximize learning and to quickly apply that learning in the work with the next group of three teachers. This provides more learning and improvement opportunities for the Implementation Team than a single trial with all nine teachers at once.
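The cohort structure can be sketched as a simple loop: three iterations, each pairing one cohort of three teachers with one plan-do-study-act cycle, so that what is learned with each cohort shapes the next. The sketch below is illustrative only; the names and the version counter are assumptions.

```python
# Illustrative sketch of the usability testing structure: nine teachers in
# three cohorts of three, one PDSA cycle per cohort, with each cycle revising
# the innovation and its implementation supports before the next cohort begins.

teachers = [f"teacher_{i}" for i in range(1, 10)]
cohorts = [teachers[i:i + 3] for i in range(0, len(teachers), 3)]

innovation_version = 1
for month, cohort in enumerate(cohorts, start=1):
    # Plan: schedule training and classroom observations for this cohort.
    # Do: train the cohort, observe classrooms, collect fidelity and quiz data.
    # Study: review data from this cohort and all earlier cohorts.
    # Act: revise the practice profile, fidelity items, and training methods.
    print(f"Month {month}: {cohort} use innovation version {innovation_version}")
    innovation_version += 1  # each iteration hands a revised version to the next cohort
```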

Iteration #1

Plan

The state legislature had just passed a law mandating new standards for grade 3 literacy. The state education leaders asked faculty of the state university to summarize the research on early literacy instruction, with an emphasis on instructional practices that might be useful for children and students from age 3 through grade 3. The summary specified the following two instruction practices found to be effective in the literature (e.g., Hattie, 2009):

  • Effective instructors encourage high levels of student engagement with education content.

  • Effective instructors provide frequent, prompt, and accurate feedback to students when they respond.

Do

Based on the summary, the Implementation Team contacted a nearby district. After some exploration stage work with principals and teachers, they secured the cooperation of nine K–3 teachers and their principals. The teachers agreed to try to use the instruction methods, participate in training, allow two people to observe their classroom every day for 2 weeks, give students a weekly quiz related to literacy content taught that week, and participate in up to 1 h of de-briefing discussion during each week. In a meeting with the teachers and their principals, a schedule was developed so teachers 1–3 would participate during month 1, teachers 4–6 would begin to participate in month 2, and teachers 7–9 would begin to participate in month 3.

Just prior to month 1, the Implementation Team developed a 2-h training workshop to review and discuss the literature regarding the two key instruction practices, model the two key components, and provide opportunities for teachers 1–3 to practice the skills in a mock classroom. The Implementation Team debriefed with the teachers at the end of training to obtain their opinions of the training methods and content.

Prior to month 1, the Implementation Team drafted four fidelity items to assess the use of the two key instruction practices. During the behavior rehearsal section of training, one member of the Implementation Team used the items to observe teacher instruction in the mock classroom. The items were modified based on those observations. The scores for each item related to teacher instruction at the end of training were analyzed to see how training could be improved next time.

Immediately after training, the three teachers began using the instruction practices in their classrooms. Starting on the third day and every other day thereafter, a member of the Implementation Team observed each classroom for 2 h with two members of the team jointly observing one classroom. The team members used the Practice Profile outline to note instances of expected, developmental, and poor examples of instruction. At the end of week 1 and again at the end of week 2, two members of the Implementation Team did a teacher instruction fidelity assessment using the four items developed prior to training and modified during training. Each teacher provided the Implementation Team with the average scores for the weekly student quiz related to literacy content taught that week.

At the end of each week, two Implementation Team members met with the three teachers to discuss the instruction practices. Teachers provided their perspectives on what was easy or difficult for them to do in their interactions with students. Implementation Team members offered suggestions for using the instruction practices based on their observations of all three teachers. Implementation Team members began drafting a coaching service delivery plan based on teachers’ input.

Study

At the end of weeks 2 and 3, the Implementation Team met to consider the information being developed. The information and data gained from the experience with the first three teachers were used to revise the innovation and improve implementation supports, as noted in the Act section.

Act

Based on classroom observations and comments from teachers, the Implementation Team re-defined the key instruction components of the innovation. The Implementation Team expanded the component, “Instructors encouraging high levels of student engagement with education content” to include “provides explicit instruction” and “models instruction tasks.” The Implementation Team drafted a Practice Profile (including the new components) with detail based on the classroom observations. The draft of the Practice Profile was reviewed with the three teachers, and their ideas were included regarding how to define expected, developmental, and poor examples of use of each component of the innovation.

The Implementation Team compared notes on the fidelity assessments to see whether they agreed on scoring each of the four items. Agreement was not good, so the items were revised to be more specific, and the number of items was increased to include the new components being operationalized in the Practice Profile. A protocol for how a fidelity observer should enter the classroom and conduct the observation was drafted for use in subsequent fidelity observations. The fidelity scores and the scores for the weekly student quizzes were summarized. No discernible relationship between the two was apparent.
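One simple way to quantify how well two observers agree on each fidelity item is plain percent agreement; the chapter does not name an agreement statistic, so the statistic, the item names, and the scores below are all illustrative assumptions.

```python
# Hypothetical sketch of checking observer agreement on fidelity items.
# 1 = the practice was observed as expected, 0 = it was not. Item names and
# scores are invented; percent agreement is one simple choice of statistic.

observer_a = {"student engagement": [1, 1, 0, 1], "prompt feedback": [1, 0, 0, 1]}
observer_b = {"student engagement": [1, 0, 0, 1], "prompt feedback": [0, 0, 1, 1]}

for item in observer_a:
    matches = sum(a == b for a, b in zip(observer_a[item], observer_b[item]))
    agreement = matches / len(observer_a[item])
    print(f"{item}: {agreement:.0%} agreement")
# student engagement: 75% agreement
# prompt feedback: 50% agreement -> items scoring this low would be made more specific
```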

As noted above, the Implementation Team began studying training during and after the training session for teachers 1–3. In week 3 the team began work on how to improve training methods and how to include the new content in training for the next three teachers.

Iteration #2

Plan

The Implementation Team met with the principal and teachers to set the time for a 2-h training workshop for teachers 4–6. The Implementation Team discussed the work during month 1 and invited questions about the classroom observations and the de-brief times.

Do

In month 2, the Implementation Team provided the revised training to teachers 4–6. The training content was based on the expanded essential functions. The revised training methods were based on the experience and feedback from teachers 1–3.

The Implementation Team provided a 2-h training workshop to review and discuss the literature regarding the key instruction practices, model the key components, and provide opportunities for teachers 4–6 to practice the skills in a mock classroom. During training, practice continued until the teachers felt competent and confident. The Implementation Team debriefed with the teachers at the end of training to obtain their opinions of the training methods and content.

During the behavior rehearsal section of training, one member of the Implementation Team used the revised fidelity items to observe teacher instruction in the mock classroom. The fidelity items were modified further based on those observations.

To collect pre-post training data, a version of the behavior rehearsal (used in training) was conducted individually for each teacher just prior to training. The teacher’s behavior was scored using the fidelity criteria. The scores for each fidelity item prior to training and during the last behavior rehearsal at the end of training were analyzed to see the extent to which teachers improved instruction skills. The data provided direction on how training could be improved next time.
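The pre-post comparison can be summarized by teacher and by item, showing both how much each teacher improved and which items training strengthened least. The sketch below uses invented scores for teachers 4–6 (1 = fidelity item met during the behavior rehearsal, 0 = not met); only the pre-post design itself comes from the text.

```python
# Hypothetical pre-post training summary for teachers 4-6. Each list holds
# 0/1 scores for four fidelity items during a behavior rehearsal, scored
# before training (pre) and at the end of training (post).

pre  = {"teacher_4": [1, 0, 0, 1], "teacher_5": [0, 0, 1, 0], "teacher_6": [1, 1, 0, 0]}
post = {"teacher_4": [1, 1, 1, 1], "teacher_5": [1, 0, 1, 1], "teacher_6": [1, 1, 1, 0]}

# Per-teacher improvement: how many more items were met after training.
for teacher in pre:
    print(f"{teacher}: {sum(pre[teacher])} -> {sum(post[teacher])} items met")

# Per-item view: items with the smallest gains point at training content to improve.
n_items = len(next(iter(pre.values())))
for i in range(n_items):
    before = sum(scores[i] for scores in pre.values())
    after = sum(scores[i] for scores in post.values())
    print(f"item {i + 1}: {before} -> {after} teachers meeting the item")
```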

Immediately after training, teachers 4–6 began using the instruction practices in their classrooms. Starting on the third day and every other day thereafter, a member of the Implementation Team observed each classroom for 2 h with two members of the team jointly observing one classroom. For teachers 1–3, one observation per week was conducted. During the observations, the team members used the Practice Profile outline to note instances of expected, developmental, and poor examples of instruction.

Two members of the Implementation Team did a fidelity assessment. The new fidelity assessment was used for assessments of teachers 1–6 each week to gain more experience with the items and to continue to develop the observation protocol. Each teacher provided the Implementation Team with the average scores for the weekly student quiz related to literacy content taught that week.

At the end of each week, two Implementation Team members met with the six teachers to discuss the instruction practices. Teachers provided their perspectives on what was easy or difficult for them to do. Implementation Team members offered suggestions for using the instruction practices based on their observations of all six teachers. Implementation Team members revised the coaching service delivery plan based on teachers’ input.

Study

The Implementation Team now has 2 months of information from teachers 1–3 and 1 month of information from teachers 4–6. In month 2, teachers 1–3 were gaining experience and using the innovation with confidence in their interactions with students. The Implementation Team began seeing more nuanced versions of the four key components of the innovation.

The pre-post training data were summarized to see where training produced more or less improvement in teachers learning the instruction skills. Those data were compared to the ongoing fidelity assessments to see if the post-training scores for teachers predicted later fidelity scores.

The fidelity scores for the six teachers and the scores for the weekly student quizzes were summarized. A pattern emerged indicating a possible relationship between higher fidelity scores and better scores on student quizzes.

Act

Based on observations and teacher comments, the Implementation Tem again re-defined the key instruction components of the innovation. The Implementation Team expanded the component, “Effective instructors provide frequent, prompt, and accurate feedback to students when they respond” to include “corrects errors by modeling a correct response” and “limits corrective feedback to the task at hand.” These new components were included in the draft Practice Profile. The draft of the Practice Profile was reviewed with the six teachers, and their ideas were included regarding how to define expected, developmental, and poor examples of use of each component of the innovation.

The Implementation Team compared notes on the fidelity assessments to see if they agreed or not on scoring each of the items. The items were revised to be more specific, and the number of items was increased to include the new components being operationalized in the Practice Profile. The protocol for how a fidelity observer should enter the classroom and conduct the observation was revised based on the experiences with all six teachers.

The pre-post training data summary made it clear that trainers were more effective when teaching the instruction components related to delivering information to students. However, the trainers were producing mixed outcomes when teaching instruction components related to providing feedback to students after they responded. The Implementation Team developed new behavior rehearsal scenarios to provide more training on those skills.

Iteration #3

Plan

The Implementation Team met with the principal and teachers to set the time for a 2-h training workshop for teachers 7–9. The Implementation Team discussed the work during months 1 and 2 and invited questions about the classroom observations and the de-brief times.

Do

In month 3, the Implementation Team provided the revised training to teachers 7–9. The training content was based on the expanded essential functions. The revised training methods were based on the experience and feedback from teachers 1–6. The Implementation Team debriefed with the teachers at the end of training to obtain their opinions of the training methods and content.

During the behavior rehearsal section of training, one member of the Implementation Team used the revised fidelity items to observe teacher instruction in the mock classroom. The fidelity items were modified further based on those observations.

Pre-post training data were collected by using a version of the behavior rehearsal (used in training) individually for each teacher just prior to training. The teacher’s behavior was scored using the revised fidelity criteria. The scores for each fidelity item prior to training and during the last behavior rehearsal at the end of training were analyzed to see the extent to which teachers improved instruction skills. The data provided direction on how training could be improved next time.

Immediately after training, teachers 7–9 began using the instruction practices in their classrooms. Starting on the third day and every other day thereafter, a member of the Implementation Team observed each classroom for 2 h with two members of the team jointly observing one classroom. For teachers 1–6, one observation per week was conducted. During the observations, the team members used the Practice Profile outline to note instances of expected, developmental, and poor examples of instruction.

For teachers 1–9, at the end of week 1 and again at the end of week 2, two members of the Implementation Team did a fidelity assessment. The revised fidelity assessment was used for assessments of teachers 1–9 each week to gain more experience with the items and to continue to develop the observation protocol.

At the end of each week, two Implementation Team members met with the nine teachers to discuss the instruction practices. Teachers provided their perspectives on what was easy or difficult for them to do. Implementation Team members offered suggestions for using the instruction practices based on their observations of all nine teachers. Implementation Team members revised the coaching service delivery plan based on teachers’ input.

Study

The Implementation Team now has 3 months of information from teachers 1–3, 2 months of information from teachers 4–6, and 1 month of information from teachers 7–9. With daily use of the new instruction methods in the classroom, teachers 1–6 were using the innovation with confidence in their interactions with students. As each teacher “made the new skills her own,” the Implementation Team began seeing nuanced versions of the key components of the innovation.

Fidelity scores for teachers 1–3 and 4–6 seemed to be improving from the first week after training to month 3. The continued revision and expansion of the fidelity items made these data difficult to interpret, but the impression from observations and teacher reports seemed to confirm the fidelity information. The fidelity scores and the scores for the weekly student quizzes were summarized. Analysis of month 3 data for all nine teachers resulted in a positive correlation of 0.50 between fidelity and student quiz outcomes.
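The fidelity-outcome correlation reported above can be computed in a few lines. The sketch below uses invented scores for the nine teachers (so the printed value will not match the 0.50 reported in the text) and a standard Pearson correlation; only the analysis idea comes from the chapter.

```python
# Sketch of correlating teacher fidelity scores with average student quiz scores.
# The nine data points are hypothetical; only the analysis idea comes from the text.

from math import sqrt


def pearson_r(x, y):
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)


fidelity = [0.55, 0.60, 0.72, 0.65, 0.80, 0.78, 0.50, 0.68, 0.74]  # one score per teacher
quiz_avg = [71, 74, 80, 70, 85, 79, 72, 75, 78]                    # month 3 quiz averages

print(round(pearson_r(fidelity, quiz_avg), 2))
```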

For two teachers in the second cohort (teachers 4–6), fidelity scores were good, and their student outcomes were outstanding! The Implementation Team and teachers met to review the classroom observations and to engage the teachers in discussion of their instruction practices. It turned out these two teachers had been mentored by the same master teacher. During their induction into teaching, they had been taught to stand by the door and greet each student by name as he or she entered the classroom at the start of the school day and again after the lunch period (Embry & Biglan, 2008). They felt this “primed the pump” and helped with student engagement.

The pre-post training data were summarized to see where training produced more or less improvement in teachers learning the instruction skills. Those data were compared to the ongoing fidelity assessments to see if the post-training scores for teachers predicted later fidelity scores.

Act

Based on observations, the Implementation Team again re-defined the key instruction components of the innovation. The Implementation Team expanded the key components to include greeting each student by name at the beginning of the school day. This new component was included in the draft Practice Profile. The draft of the Practice Profile was reviewed with the nine teachers, and their ideas were included regarding how to define expected, developmental, and poor examples of use of each component of the innovation.

The Implementation Team compared notes on the fidelity assessments to see if they agreed or not on scoring each of the items. The items were revised to be more specific, and the number of items was increased to include the new component being operationalized in the Practice Profile. The protocol for how a fidelity observer should enter the classroom and conduct the observation was revised based on the experiences with all nine teachers.

The pre-post training data summary showed that trainers produced better outcomes when teaching instruction components related to providing feedback to students after they responded. However, there was need for further improvement. The Implementation Team decided to revise how they were giving feedback to teachers during training (e.g., focus comments on the positive behavior; model expected behavior prior to asking the teacher to practice again) during the behavior rehearsal scenarios.

Cycle

After 4 months, the Implementation Team was refining the fine points of the Practice Profile, assessing pre-post training knowledge and skills of teachers participating in training, using a good set of items to assess instruction practices in the classroom, and collecting information to correlate fidelity scores with student quiz scores. The innovation still needed improvement but now met the basic criteria for a usable intervention.