It is a fundamental precept in educational change theory, research, and practice that the implementation of changes in the professional beliefs and knowledge, behaviors, organizational conditions, and outcomes of people working in schools and school systems takes place over time. The aim of this chapter is to provide a concise overview of significant conceptual tools developed by education change theorists for describing, studying, and explaining that process as it plays out over time at different levels – e.g., individual, program, school, and school system. Given the volume of published research on educational change over the past 50 years, it is perhaps surprising that our understanding of the process dimensions of educational change remains limited to a few core concepts that, once articulated, have assumed a taken-for-granted status. This chapter revisits these concepts, highlights key areas of debate or lack of conceptual clarity, and suggests areas for further research regarding the processual nature of educational change, particularly in terms of stage or phase theories of change over time. For each level of change considered, reference is made to key sources in the literature for the prevailing conceptual models of the temporal dimensions of the change process. While other publications might have been selected, these have been chosen because they are widely cited and applied in the literature on educational change, and because they draw attention to many of the key ideas and issues in considering change as a process that evolves through identifiable personal and organizational stages or phases over time.

Change as a Developmental Personal Process

The developers of the Concerns-Based Adoption Model (CBAM) deserve credit for their seminal conceptualization of change as a developmental process in the attitudes and behaviors of individuals attempting to put new ideas and practices into use (Hall & Loucks, 1977, 1978; Hall, Loucks, Rutherford, & Newlove, 1975; Loucks & Hall, 1977; see Hall & Hord, 2006, for a recent comprehensive overview of CBAM and supporting research). The basic ideas are straightforward. One dimension of change is represented as a developmental sequence of “Stages of Concern” that reflect a person’s (e.g., a teacher’s) disposition or attitudes toward a change that he or she is attempting to put into practice (voluntarily or as an organizational mandate). A second dimension focuses on a developmental progression in a person’s behaviors as he or she prepares for, begins, masters, and refines the use of new professional practices, referred to as “Levels of Use.”

Through studies of experienced teachers implementing changes in curriculum and teaching (referred to as innovations), the CBAM developers identified and defined seven Stages of Concern. At Stage 0, Awareness, a teacher has little knowledge about or interest in the change. At Stage 1, Informational, the teacher is interested in learning more about the change and the implications of its implementation. Teacher concerns at Stage 2, Personal, reflect anxieties about the teacher’s ability to implement the change, the need for change, and the personal costs of getting involved. Stage 3, Management, concerns intensify as the teacher first begins to cope with the logistics and new behaviors associated with putting the change into practice. At Stage 4, Consequence, teacher concerns focus on the impact of the change on students in their classrooms and on ways of modifying the innovation or its use to improve its effects. Teacher interest in working with other teachers to jointly improve the benefits of implementing the change for students is manifested in Stage 5, Collaboration, concerns. At some point in the change process, teachers may develop Stage 6, Refocusing, concerns. These teachers think about making major modifications in the use of the innovation, or perhaps replacing it with something else. The intent of the developers of the CBAM framework was not simply to create a research-based framework for understanding teacher change, but also to create ways to assess teachers’ feelings and experience with innovative practices, and to use this information to provide interventions that would address their concerns.

The image of affective “stages” that a teacher (or anyone implementing a change in practice) progresses through over time is somewhat misleading. It is grounded in the notion (supported by research) that as teachers (both novice and experienced) become aware of, learn about, try out, and master the use of new teaching methods and programs, their feelings about the change often evolve from a predominant focus on self (high Informational and Personal concerns), to task (high Management concerns), to impact (high Consequence and Collaboration concerns). Where things can get confusing, however, is when education researchers or practitioners misinterpret the CBAM framework as a necessary and lockstep evolution in the concerns of innovation users, rather than a possible progression dependent upon the influence of other factors at play in the implementation context. CBAM theory posits that the nature and intensity of individual concerns about the implementation of new ideas and practices across and within each stage will be higher or lower, depending not only on the person’s progress in mastering the change, but also on the organizational conditions (e.g., administrative and collegial support, fit with prior beliefs and practices) associated with the change, and the perceived impact or results of the change for those affected (teachers, students). Without effective professional development inputs during the time in which teachers are learning to use new teaching strategies and programs, for example, teachers may experience unresolved Personal and Management concerns that can lead to frustration, resistance, or even abandonment of the change. Furthermore, interventions that do effectively resolve early-stage concerns do not necessarily stimulate more intense concerns at subsequent stages in the model. Researchers applying the CBAM have discovered, for example, that even with repeated use of a new practice and adequate professional assistance, teachers may incorporate new teaching methods and programs into routine patterns of use without necessarily shifting their concerns toward refinement of the innovation based on observed evidence of student impact. Research on teacher collegiality and professional community suggests that the shift into more intense Consequence or Collaboration stage concerns may be less a function of teachers’ individual mastery in the use of new programs and practices than of whether the organizational culture of the school in which they work emphasizes improvement in student learning through shared goals, teacher collaboration, and ongoing teacher learning activities (e.g., Anderson, 1997; Dufour, Eaker, & Dufour, 2005; Little, 1982; Rosenholtz, 1989).

A second element of potential confusion in applying this stage theory of teacher feelings about implementing new practices is that teachers are likely to experience and express concerns that link simultaneously to multiple “stages” in the model. It is the relative intensity of their concerns related to one or more stages that distinguishes teacher attitudes toward a particular change they are involved with, not the mere presence or absence of concerns. For example, a teacher who is preparing or just starting to use some new teaching method might be genuinely wondering about the potential benefits of the innovation for student learning compared to current practices (Consequence concerns), while being predominantly concerned with figuring out how to integrate the use of that method into his/her daily lesson plans, and with attaining a basic level of comfort and competence in how he or she applies it in the classroom with students (Management concerns). In other words, at this point in their mastery of the use of the new method, teachers are more preoccupied with the logistics and skill of doing it than with assessing and judging its effects on students and modifying it accordingly. It is not the case, however, that they do not care about student impact. In a metaphorical if not a real sense, it may be more appropriate to think of the different categories of concerns less as distinct stages than as notes in a musical chord that can be played in ways that give emphasis to different feelings depending on the teachers’ progress in context. The CBAM developers refer to change users’ concerns profiles across the stages. A profile may reflect multiple peak concerns, not a single dominant focus on one stage. The theoretical and practical meaning of “stage” in this well-known model of the evolution of teachers’ dispositions toward the implementation of changes in practice would benefit from further research.

The second dimension of the CBAM framework for understanding, assessing, and facilitating teacher change refers to a behavioral progression in knowledge and skills associated with mastering the use of new programs and practices, described as Levels of Use. Progression from one level to the next is marked by key decision points and corresponding behaviors in several domains associated with the change: acquiring information, assessing, sharing, planning, status reporting, performance, and knowledge. Levels 0 (Nonuse), I (Orientation), and II (Preparation) describe the behaviors of teachers vis-à-vis an innovation before they actually begin using it in the classroom. Teachers at Level I, Orientation, are seeking or receiving information about the change, but have not yet committed (or been committed) to implementation, whereas at Level II, Preparation, a teacher is actively planning to begin implementing the program or practice at a later date. Once teachers actually begin to operationalize their use of the innovation in the classroom, they enter Level III, Mechanical Use. Teachers at this level are struggling with the logistics of implementation (e.g., lesson and resource planning, classroom management, record keeping) and with attaining basic mastery of the new teaching skills. Any changes they make in their use of the innovation are likely to be teacher-centered, that is, aimed at making use of the innovation more manageable and easier to practice. A teacher who establishes a pattern of regular use and who makes few adaptations in his/her use of the new program and practices is said to have attained Level IVA, Routine Use. Many teachers will settle in at a Routine Level of Use once the new program or practice gets integrated into their ongoing repertoire of teaching strategies, materials, and so on. Some teachers, however, may begin making adjustments in their use of the program or practice based on evaluations of its impact on students. This is characterized as Level IVB, Refinement Use. If they actively seek out and interact with other teachers to collectively and collaboratively modify their use of the innovation to improve student results, they are engaged in Level V, Integration, behaviors. Eventually, some teachers may exhibit Level VI, Renewal, behaviors. Teachers at this level are actively exploring alternative programs and practices or major changes in the innovation.

Similar to the Stages of Concern, the CBAM Levels of Use concepts and framework describe a possible – not an inevitable – progression of individual innovation user behaviors associated with mastering the implementation of new programs and practices in teachers’ work. As a developmental model of innovation user behaviors over time, however, the Levels of Use concepts and framework are more inclusive of alternative outcomes of use than the Stages of Concern. The behavioral model recognizes the practical reality that many educators engage in all sorts of professional learning experiences (Orientation) that lead to greater awareness and knowledge about programs, ideas, and practices that they may never end up implementing. It distinguishes people who are planning and otherwise getting ready to try out something new (Preparation) from those who are actually applying it in their work (Mechanical Users and beyond). Most importantly, the model accommodates the fact that some innovation users (perhaps most), after an initial period of mastering the logistics and basic skills required to implement the program or practice (Mechanical), will settle into a personally comfortable Routine Level of Use. The factors that lead some educators to engage individually or collectively in deliberate impact assessment and modification of their use of new programs and practices (Refinement, Integration) are not well understood. As noted for the arousal of impact-focused concerns, this may generally be less a function of individual professional orientations and skills than of workplace-specific norms and arrangements that give more or less emphasis to results. The original CBAM research and theory were developed prior to the contemporary era of curriculum content standards, student performance standards, and accountability policy. The incidence of impact-focused levels of user behaviors (Refinement, Integration, Renewal) linked to the implementation of new programs and practices may be higher nowadays given the changes in the policy context. Again, the theory that supports this developmental model of change would benefit from further research.

The common-sense appeal of the Levels of Use (and Stages of Concern) concepts and frameworks relates to their generic applicability to any new policy, program, or professional practice that requires expected implementers to alter current professional beliefs and behaviors. Just because the framework resonates well with people’s practical experience, however, does not mean that it makes perfect sense as a developmental model of change. One source of persistent confusion has to do with the nature and definition of professional expertise as it relates to the implementation of new programs and practices. Implicitly we can infer that someone (e.g., a teacher) who has sufficiently mastered his/her use of a new program or practice to move from an assessment of Mechanical Use to Routine or Refinement Use has attained a higher skill level. Some CBAM researchers, however, note that implementers may routinize the use of new programs and practices at sub-optimal levels of expertise (Anderson, 2006). In other words, they are implementing the practices on an ongoing basis, and are comfortable with the way they are doing it, but demonstrate low levels of understanding and skill in their use (and are probably not aware of that discrepancy).

Our understanding of teacher and principal growth from novice to expert generally and with regard to the use of specific teaching and leadership strategies remains poorly developed. When it comes to teachers, in particular, our notions of developing expertise are confounded with notions of fidelity and with compliance. Fidelity refers to the degree to which someone, such as a teacher, is implementing a program or practice in accordance with the way that program or practice is designed to be used (Fullan, 1982). Compliance adds the prescriptive expectation that particular forms or patterns of practice are not merely professionally desirable, but are formally required by some external authority (e.g., school system policy and/or administrators). Some change researchers and theorists have argued that it is appropriate to view and assess changes in teacher practices as a process of behavioral change that progresses incrementally toward conformance with ideal images of implementation, when supported by effective leadership, resources, and technical assistance (e.g., Leithwood & Montgomery, 1982). From this perspective, variability in the ways that teachers implement new programs and practices reflects variations in teacher understanding and skill in the use of those particular programs and practices. To the extent that these variations are conceived as a linear progression of behaviors that approximate a desired pattern of use, this represents a normative developmental model of teacher change over time. Others have similarly distinguished variations in teacher use of specific programs and practices as ideal, acceptable, or unacceptable relative to prescriptive definitions of what the innovation would look like in practice if implemented well, but without arguing that the variations represent developmental steps in mastering its use (e.g., Hall & Hord, 2006).

Our conceptions and understanding of variability and growth in teacher implementation of educational innovations are further complicated by the recognition that innovations are typically multi-dimensional (Fullan, 1982; Hall & Loucks, 1981; Leithwood, 1981). In broad terms, educational innovations for teachers may involve changes in materials (curriculum content, textbooks), practices (e.g., teaching or assessment strategies, grouping practices, classroom management), and beliefs (ibid). The exact nature and extent of change within each of these dimensions, however, is innovation and context specific. The adoption of a new textbook, for example, is a change in materials that may or may not fit with teachers’ prior beliefs and practices. Furthermore, for a group of teachers simultaneously learning to implement the same new teaching strategy (e.g., guided reading), the gap between their prior beliefs, understanding, and practices and those associated with use of the new strategy may vary in magnitude and complexity for different individuals. Leithwood (1981) proposed a generic framework of ten dimensions that might be implicated in the implementation of any change in teaching and learning (not all changes would necessarily affect all dimensions), and that could be used as a tool for comprehensively describing and assessing use of different components of a change. For our discussion here, the basic point to highlight is that for a given set of innovation users, implementation progress relative to expected and ideal patterns of implementation may vary for different dimensions. Considered from this perspective, the idea that teachers or anybody implementing changes in their professional practice may move through holistically defined but empirically identifiable stages or levels of concern and skill in their use of that change gets murky indeed. Intuitively, no one disputes that implementing changes in current practices is not a single event, but rather an evolution in attitudes, understanding, and behaviors for those involved over time. The theoretical concepts that we use to describe and explain this process, however, are not resolved.

Program and School Change as an Organizational Process over Time

The preceding section examines developmental theories of change in educational settings from the vantage point of the individuals attempting to implement changes in programs and practices. This section focuses on process theories concerning the implementation of new educational policies, programs, and practices over time more from an organizational perspective. Key sources for the original ideas date from the 1970s and 1980s in the research and writing of Berman and McLaughlin (1976; Berman, 1978, 1980, 1981), Fullan (1982, 2007), Fullan and Pomfret (1977), Miles and Huberman (Miles, 1983; Huberman & Miles, 1984), and a few others (e.g., Corbett, Dawson, & Firestone, 1984). There are four core ideas that have become ingrained in the discourse on educational change – (1) that change is an organizational process over time; (2) that the process can be described and explained in terms of three broad phases; (3) that activities associated with different phases are interactive, not necessarily sequential in time; and (4) that change over time is less a process of direct replication than one of mutual adaptation. This conceptualization of change as an organizational process over time has been applied to the investigation of educational changes that take the form of new programs (e.g., a new curriculum, a new textbook, a packaged set of activities and materials for a specific curriculum area) and new instructional strategies (e.g., cooperative group learning, particular assessment techniques, specific classroom management strategies), as well as to the study of the adoption and implementation of models for whole school reform.

First is the idea that change is a process and not an event (Fullan, 1982; Hall & Loucks, 1977). This idea emerged as a rebuttal to the misguided expectation by policy makers and external program developers that putting new programs and policies into practice was equivalent to the simple replacement of one technology with another, an event commonly referred to as innovation adoption. This concept worked well when applied to the diffusion and adoption of technological innovations (e.g., the adoption of new types of seeds by farmers) (Rogers, 2003). Education change researchers discovered early on that public announcements declaring the adoption of new policies or changes in educational products (e.g., curriculum content, textbooks, program kits) and practices (e.g., team teaching, teaching methods) at the classroom, school, school district, or school system levels did not guarantee that practitioners at the local level would change what they were doing (Charters & Jones, 1973). As characterized by Berman (Berman, 1981; cf. Fullan, 1982, 2007), change is an implementation-dominant process, not a technology-dominant process, and the progress and outcomes of the implementation process are highly contingent upon the interaction of the innovation with local context factors (e.g., perceived need and motives for change, innovation quality and complexity, fit with prior practices and beliefs, funding, resources and working conditions to enable change, quantity and quality of technical assistance, leadership stability and skill, participation in decision making by key stakeholder groups, competing priorities and expectations).

In their early research and writing, Berman and McLaughlin (1976) employed the concept of “stages of innovation” to characterize the overall organizational process through which school district and school personnel engage in efforts to replace, modify, or supplement current professional practices with new ones over time. They defined three stages: initiation, implementation, and incorporation. Each stage is associated with different activities and decisions concerning the selection, use, support, and progress in putting the change into practice on the part of local actors in their respective roles. Initiation encompasses decision-making activities about the reasons for change, selecting solutions (new programs and practices), implementation planning, and seeking resources. Implementation refers to the stage during which local educators are actually attempting to put the selected change into practice. Typically, this involves activities that lead to adaptations in the innovation as well as changes and modifications in the organizational setting and behaviors. Incorporation refers to activities associated with the continuation of what was originally a change into ongoing organizational routines and work practices. Berman and McLaughlin noted that decisions and actions at earlier stages affect what happens at later stages. From their research on the implementation of some 280 federally funded educational change projects in the United States, they concluded that while the focus of change was generally predictable from the content of the change initiative, the actual progress and outcomes of change were highly dependent upon local decisions and actions vis-à-vis its adoption, use, and continuation and upon the degree of specificity or uncertainty about the image of what the change should look like once put into practice.

Fullan (1982) nudged the conceptualization of the change process in organizations away from the linear notion of stages. He referred instead to three broad “phases” of change: initiation (also referred to as adoption or mobilization), implementation (or initial use), and continuation (cognate terms include incorporation, routinization, institutionalization). While he did not explain his decision to employ the concept phase instead of stage, his explanation of this model of the change process clearly indicates that he was striving to develop a way of thinking and talking about change in organizational practices that could account for the fact that it is “not a linear process,” even though it occurs over time. Like Berman and McLaughlin, Fullan asserted that what happens at one phase strongly affects events and outcomes at later phases. But he added the nuance that events associated with a particular phase can feed back into and alter decisions and actions taken previously, and employed two-way arrows in a conceptual diagram to try to capture the interactive relationships between actions within each phase, as opposed to portraying change as a deterministic causal chain of events. Nonetheless, the metaphorically sequential image of a change progressing through the phases over time remained powerfully embedded in this conceptualization of change. In a later work, citing the research and thinking of Matthew Miles, Fullan further elaborated on what he then characterized as the “Triple III” model of change: initiation, implementation, and institutionalization (Video Journal of Education, 1992). Implementation success (defined as putting the change into practice and sustaining that practice) depended upon the quality of attention and action given to distinct conditions and activities associated with each phase: Initiation (high-profile need, clear model of change process, strong advocate, active initiation); Implementation (orchestration, shared control, pressure and support, technical assistance, rewards); and Institutionalization (embedding, links to instruction, widespread use, removal of competing priorities, continuing assistance).

Berman (1981) reconceptualized his original stage constructs of the change process as “sub-processes” related to specific functions and activities within an organizational system. According to this organizational systems view, a change can be said to occur when existing organizational routines are replaced or modified such that the system enters a different state of organizational behaviors and attendant relationships, materials, and so on, depending on the content and scope of the change. While this occurs over time, Berman deliberately avoided the language and images of linearity in the activities associated with the three sub-processes: mobilization, implementation, and institutionalization. The sub-processes co-exist as change-related functions in the organization, and the activities linked to those sub-processes can overlap in time and interact in mutually influential ways. The activities associated with certain sub-processes, however, may be more prominent in the actions of local actors at different times in the history of a change initiative, and the roles that those actors play in the change process can vary for different sub-processes. Mobilization activities include developing an image of the desired change (e.g., needs assessment, goal setting, product adoption), planning for implementation, and lobbying internally and externally for support (commitment, political support, resources, etc.). Implementation encompasses two broad functions that local educators engage in as they attempt to put new programs and practices into action – clarification and adaptation. Clarification is linked to activities such as professional development that help implementers figure out exactly what and how to do the change and how it differs from what they were doing before. Adaptation refers to local activities that lead to modifications in the content or design of the change as originally presented, as well as to the changes in behaviors and knowledge that they experience as a result of the process. Institutionalization happens when a system stabilizes into a changed state of routine behaviors, and is manifested through activities that demonstrate the assimilation of new practices into the ongoing behaviors of organizational members affected by the change, and by incorporation of these new routines into associated organizational decision-making processes (e.g., budget, staffing, support services). For purposes of this discussion, the key idea advanced by Berman is that organizational change is more appropriately conceived of as a change of state in an organizational system of behaviors, arrangements, and processes that occurs as a result of actions taken within different sub-processes of the system, but not as a predictable progression through developmental stages or phases over time. Berman’s ideas foreshadowed much of the contemporary thinking about schools and school systems as complex adaptive systems, but these ideas did not catch on at the time.

What did capture the attention of educational change scholars and practitioners was the idea that various implementation outcomes were possible (where outcomes refer to the use of new programs and practices, not to the effects of their use on students or organizational effectiveness and efficiency). Berman and McLaughlin (1977; cf. Berman, 1978) distinguished four possible outcomes, differentiated in terms of the changes that result through the implementation process in implementer behaviors and in the new program or practices, i.e., the innovation. Non-implementation (or symbolic implementation) describes a state in which no change occurs either in implementer behaviors or in the innovation. Co-optation describes a situation in which the implementers modify the new program or practice to conform to what they were already doing, resulting, as well, in no substantive change in organizational work practices (though sometimes the users adopt new ways of talking about what they do that promotes an illusion of change in beliefs and behaviors). Berman and McLaughlin (op cit) reported that mutual adaptation was the most common implementation outcome associated with successful change. Under these circumstances, the implementation process results in changes in implementer behaviors in the direction of those envisioned by the innovation developers and promoters, as well as in adaptations in the innovative program or practice in response to local circumstances. The fourth implementation outcome, technical implementation, refers to the rational planning image that implementers of an innovation will alter their existing behaviors in compliance with the ideal forms of practice as specified in new policies, programs, or practices, with minimal changes in the design, content, and procedures of the innovation. Berman and McLaughlin reported that they did not actually find examples of this outcome in their investigation of the implementation of federally funded educational innovation projects in the United States (ibid). Other education change researchers, however, argued that the change projects that Berman and McLaughlin studied simply did not include procedurally specific programs and practices that were known to yield demonstrably positive effects if faithfully implemented as designed (Crandall, 1983) when conditions conducive to successful implementation were in place (e.g., leadership, good training, resources). While it is debatable whether any new program or practice is ever exactly replicated by users in different settings, the idea that the quality of education could be substantially improved if only teachers and principals would carefully replicate “best practices” that have worked well for desired educational goals in schools serving similar students with similar resources remains deeply ingrained in the discourse on educational change.

Fullan drew a distinction between two organizational approaches to implementation – a fidelity approach and an adaptive approach (Fullan, 1982, 2007; Fullan & Pomfret, 1977; cf. Berman, 1980). The fidelity approach is most appropriate when procedurally clear new programs and practices are introduced in settings where there is a good match between local needs and goals and the selected change, where local resources and conditions are adequate to support the implementation of that change as designed, and when the likely effects of innovation use have been previously demonstrated in similar settings. Under these circumstances, organizational expectations and support for change may aim for the ideal of technical implementation of the change, whether that outcome is achieved or not. The adaptive approach is more appropriate when the technology of innovation use is not well specified, the claimed benefits of implementation are not well supported by evidence, and the local needs and resources conditions are not well matched to the change. Under these circumstances, the expected outcome would be mutual adaptation.

Whether by design or by default, mutual adaptation remains the most realistic conceptualization of what happens when educators genuinely attempt to implement new ideas, programs, and practices, i.e., changes occur both in implementer behaviors and in the innovation as initially conceived and designed by those promoting the change. Has our understanding of the process of mutual adaptation evolved since the original formulation of these ideas in the 1970s? The simple answer is not much. Analysis and discussion of mutual adaptation as a phenomenon have tended to focus less on the “mutual” dimensions of adaptation than on whether and how implementers alter the change as originally introduced. The most common strand of inquiry and discussion reaffirms the idea already noted that under certain conditions (e.g., an uncertain technology, poor fit between the innovation and the “problem” it is supposed to address, inadequate resources, ineffective leadership and assistance) the degree of adaptation of the innovation will be greater than under the opposite conditions. From this perspective, mutual adaptation is commonly characterized in quantitative terms as a matter of degree. Berman and McLaughlin (1977) also used the term mutation to describe what happens when implementers modify the design and content of a change as they put it into practice. Hall and his colleagues introduced the idea that for innovations that are procedurally well specified, there can be a point of “drastic mutation” beyond which so much modification has occurred in the program or practice as initially presented that it is no longer appropriate to claim that the original innovation has been implemented (Hall & Hord, 1987, p. 137). No one, however, has presented empirical evidence to suggest any uniform or alternative stages or developmental patterns in the process of mutual adaptation over time.

Datnow, Hubbard, and Mehan (2002) present a more elaborated conception of mutual adaptation in which context constitutes the critical explanatory dimension, rather than characteristics of the innovation, the implementation support system, and time. Their research and analysis focused on the fate of changes (e.g., comprehensive school reforms) originating externally to schools and school districts attempting to put them into practice. While employing the familiar language of reform adoption, implementation, and sustainability (i.e., continuation or institutionalization) to organize their account and analysis of change over time, they reject technical rational linear conceptions of the change process. They define implementation simply as “doing the reform,” and building upon the earlier work of Berman and McLaughlin, Fullan, and others, they argue that implementer adaptation of new policies, programs, and practices in relation to varied components or dimensions of local context is the normal process of change, even in situations involving highly prescriptive innovations. Their theoretical and research-based conceptualization of context and the adaptation process, however, adds complexity and depth to our understanding of this phenomenon. First, they propose that mutual adaptation might be more appropriately conceived of as a process of “co-construction” between those who design, advocate, or facilitate the implementation of a change and those expected to participate in enacting the change. Second, they argue that this co-construction process is subject to the varied interests, actions, and influence of all stakeholders implicated in implementation decisions and actions acting from the situational position of their particular roles and social contexts. Third, they argue that context is often misconceived as a system of lower levels (e.g., classroom, school) embedded within higher or broader levels (e.g., district, community, state). This metaphor tends to promote hierarchical and unidirectional perspectives on implementation in which local actors are portrayed as simply reacting to changes and pressures originating from external sources. Datnow, Hubbard, and Mehan argue instead for what they call a relational sense of context. From this perspective, people implicated in different functions of the overall enterprise of public education – e.g., state policy making, state education agency activities, district office work, school administration, classroom teaching, parental and community involvement, being students – each enact their role in particular social contexts. These social contexts co-exist in interconnected sets of relationships. Actions taken in one context create outcomes and conditions which can permeate through these interlocking relationships to influence subsequent actions in other contexts in unpredictable ways. The unpredictability arises in part from the unique histories, socio-cultural characteristics and relationships, and social structural conditions of the different interacting contexts. In order to understand mutual adaptation in the implementation of educational change, one has to examine the interconnections among these contexts and how people involved in implementation respond in terms of the specific characteristics of the contexts within which they play out their roles in the process.
The overall process (inter-contextual connections, communication between contexts, and prevailing responses within contexts) is strongly influenced by those actors whose organizational, political, or social positions allow them to exert the most power over how reform efforts and responses to them are defined and the corresponding courses of action that are taken. This relational and dynamic view of actions taken within and between interlocking contexts does not privilege a priori the influence of actions taken in one context over another. Change is multi-directional, not unidirectional. Datnow, Hubbard, and Mehan provide examples of local adaptations of school reform initiatives to a variety of structural and cultural contextual conditions – school organizational constraints, overlapping reform initiatives, state and district policies, linguistic diversity, and educator beliefs about student abilities, teaching and learning.

Datnow, Hubbard, and Mehan’s account of the mutual adaptation process is consistent with complexity theory perspectives on social organizations as complex adaptive systems in which change occurs as a non-linear dynamic process over time (Kauffman, 1995; Waldrop, 1992; cf. Fullan, 2003). Actions taken in any of the specific socio-organizational contexts that are interlinked and implicated in adopting, implementing, and sustaining the change have unpredictable effects (including no effects) on organizational conditions and actions in other contexts. To posit predictable stages and outcomes of implementation is meaningless in this view. The overall model of the implementation of school reforms and programmatic changes in educational settings, however, preserves the basic distinction in chronological time between deciding to change (adoption), doing the change (implementation), and sustaining (or abandoning) the changes over time.

All analysts of the process of planned change in education talk, in both a chronological and an organizational sense, about the continuation or sustainability of changes in programs and practices beyond early experiences with implementation. While there is no fixed timeline, the basic idea is that some innovations lead to enduring changes in the way educators go about doing their work; that is, they become routine features of ongoing practice. Others only lead to temporary modifications in behaviors that are abandoned after some recognizable period of initial use. Changes may be abandoned for any number of reasons – e.g., loss of funding or other resources required to sustain the program or practice, evidence or perceptions of ineffectiveness, low leadership pressure and support, the presence of other priorities competing for people’s time and energy, and staff turnover. As previously reviewed, change researchers and theorists have identified a number of organizational conditions, management practices, and innovation characteristics that affect the likelihood that a given change in a particular setting will be sustained or not (Anderson & Stiegelbauer, 1994; Berman, 1981; Fullan, 1982; Miles, 1983). The key point is the idea that some efforts to change do result in what systems and complexity theorists refer to as a state change for the people and organizations involved. That is, the changes become more than passing perturbations in the way people conduct their work.

The idea of a change in state (as opposed to a stage or phase in change) makes sense, but is not without its own conceptual and empirical conundrums. The first has to do with the multi-dimensionality of change. Thus, some components of a change may get institutionalized and sustained as a feature of ongoing practice while others do not. The second has to do with the loosely coupled nature of schools and school systems as organizations. Thus, a change that affects multiple settings (classrooms, schools, district offices), or multiple contexts as conceptualized by Datnow and her colleagues, might get sustained in some contexts but not in others. Even in those where it does carry on, it is likely to take different forms as a result of the contextually sensitive adaptation process.

The third has to do with the magnitude of the change in terms of the actual difference it makes in prior patterns of work for the educators involved. Numerous analysts of planned educational change draw a distinction between changes which may result in people refining existing practices, replacing existing practices, or adding new practices to existing patterns of work, but which do not alter the fundamental nature of that work. Elmore (1995) describes this as the difference between first-order and second-order change. The idea of changes and improvements that are more profound and far-reaching in their consequences for how schools and school systems are organized, the professional work of educators, and the nature and outcomes of student learning, than simply changing materials, learning a new teaching strategy, enabling people to work together (rather than individually) to try to improve what they do, and so on, is intellectually and politically appealing, but challenging to define and identify empirically. Perhaps we will know it when we finally experience it? Suffice it to say that most educational change initiatives are more about modifying the existing state of school organization and educational practice than about fundamentally changing that state. Conceptually and instrumentally, the idea of a state change in education runs into difficulties when we try to define the parameters and boundaries of the phenomenon or system that is potentially undergoing a non-trivial change in “state.” These concepts are hard to apply at the organizational levels of schools, districts, or state/national educational systems.

Regardless of the organizational level or magnitude of change at hand and in mind, the long-standing notion of institutionalization as a final stage or phase of planned change is challenged by the contemporary ideology of continuous improvement in the context of standards-based and results-oriented education accountability systems. The idea that even new programs and practices that are successfully put into practice may eventually be subject to major modifications or replacement was noted long ago by the developers of the Concerns-Based Adoption Model, in the form of the Refocusing Stage of Concerns and Renewal Level of Use behaviors (Hall & Hord, 2006; Hall & Loucks, 1977, 1978). Crandall, Eiseman, and Louis (1986) posed the question of whether institutionalization or renewal was the more appropriate organizational goal for the introduction of school improvement–oriented policies, programs, and practices. Over the past 20 years, the entrenchment of national and state accountability systems linked to curriculum content standards, student performance standards, student performance targets, large-scale testing of student performance, and mandatory consequences (rewards, assistance, sanctions) at the school and district levels based on evidence of performance is fueling and sustaining the idea of continuous improvement in the quality of teaching and learning in schools.

Drawing upon studies of sustained (5–10 year) improvement efforts at the school and district levels, Anderson and Kumari (2008) distinguish the organizational practice of continuous improvement from the evidence of impact over time on student learning and the quality of teaching. They report that schools and school districts that engage in sustained improvement efforts may evolve through successive phases of improvement marked not only by the introduction of new or revised instructional programs and practices, but also by changes in the organizational structures and processes to support ongoing change, when there is compelling evidence that further improvement requires rethinking the existing support system for improvement. The latter point is key. It arises from the recognition that sustained improvement in student learning can stall in two significant ways. First, the support system as currently organized may reach a limit in terms of its capacity to effectively reach and provide ongoing support for improvement to all teachers, principals, and schools that it is intended to serve. Second, after a period of change, student learning levels can reach a point where evidence of improvement plateaus (cf. Fullan, 2003; Hopkins, 2007). Further improvement will not be accomplished simply by doing more of the same. These findings are discussed further in the succeeding section on educational change at the system level (district, state, nation).

System-Wide Change and Improvement in Student Learning

The idea of continuous improvement as applied to educational change has brought student learning outcomes more explicitly into theories and models of change. But what does it mean for student learning to continuously improve in a school, a school district, a school system? Is it incremental growth on set indicators of academic achievement for all students? Is it mainly about bringing low-performing students up to the level of their higher performing peers? Does it involve changing the standards and expectations as student performance rises? Does it happen in ways that can be characterized as phases, stages, or changes in state? Empirical and conceptual accounts of student learning over time in the context of educational change are recent and associated mainly with studies of large-scale reform at the state/national and school district levels (Fullan, 2000). Some well-known and researched examples at the district level include the decentralization reform in the Chicago school system (Bryk, Sebring, Kerbow, Rollow, & Easton, 1998; Simmons, 2006) and the case of Community School District #2 in New York City (Elmore & Burney, 1997). Longitudinal investigations of improvement at the state/national level are more difficult to come by. Two prime examples are an evaluation of the National Literacy and Numeracy Strategies in the United Kingdom (Earl et al., 2003; cf. Fullan, 2003; Hopkins, 2007) and the controversial accounts of and debates about state-wide improvement and equity in student achievement across Texas in the 1990s (e.g., Scheurich & Skrla, 2001; Skrla, Scheurich, Johnson, & Koschoreck, 2001a; Valencia, Valenzuela, Sloan, & Foley, 2001).

The breadth and depth of longitudinal research on large-scale reform at this level are insufficient to generalize with much certainty about patterns of change over time. We can, however, highlight some key findings and ideas emerging from this research. One is the phenomenon of plateaus in the trajectory of aggregate improvement in student learning over time. While this has been noted in long-term studies of school-level improvement (e.g., Anderson & Kumari, 2008; Anderson & Stiegelbauer, 1997), it is more profoundly evident in evaluations of system-wide reforms involving large numbers of schools and districts. Analysts of the British government’s Literacy and Numeracy Strategies reform, for example, chart significant improvements in the percentages of elementary school students performing at or above government-prescribed standards on standardized tests of reading and mathematics during the first 3 years (1997–2000), a reduction in the gap between higher and lower performing students, and a phenomenal scaling-up of the number of schools and local education authorities reporting these positive results (the story and data are reviewed in Hopkins, 2007; also Fullan, 2003). Student performance across the system, however, leveled off for about 3 years and only began to rise again around 2004 and 2005. Hopkins attributes the early gains to the government’s success in designing and intensively supporting a rigorous standards-based national curriculum development and implementation reform. In short, a national infrastructure of policies, resources, training, technical assistance, and monitoring to support implementation of the literacy and numeracy initiatives was effectively put into place. Citing the reform’s director, Michael Barber, Hopkins refers to this period of the reform as a time and strategy of informed prescription. Informed prescription worked to get the curriculum reform into place with significant gains in student learning, but did not result in the ideal of continuous improvement once the initial gains settled in. Hopkins attributes the revitalization of improvements in student performance after a 3- or 4-year plateau to a deliberate shift in the government’s strategy for improvement to what Barber conceptualized as informed professionalism. The impetus and support for ongoing improvement were redirected from a dependency on external direction and expertise to developing local leadership for improvement, and to encouraging and supporting lateral networking among schools and school personnel about promising practices and solutions to locally contextualized needs and challenges for improvement. The government reorganized its support for improvement less around technical implementation of the literacy and numeracy reforms, and more around developing and sustaining the capacity of school personnel to lead and make improvements together. For purposes of this discussion of phases, stages, or state changes in the process of educational change, the exact details of this shift in government strategy are less relevant than the evidence of the plateau effect in the improvement of student learning over time, and the British government’s strategic decision that further improvement meant rethinking and reorganizing the support system for change within the parameters of national goals.

The student achievement plateau phenomenon, followed by a restructuring of system-level support and then by renewed evidence of student performance gains, is also reported for the Ontario government’s literacy initiative (Campbell & Fullan, 2006) and in longitudinal analyses of decentralization reforms, district organization and support, and student outcomes in the Chicago school system (Bryk et al., 1998; Simmons, 2006). The Chicago case adds some additional complexity to this pattern. As recounted by Simmons (2006), the Chicago reform has moved through three phases of improvement relative to student performance and to the district role and relationships with schools. Each phase of reorganization was preceded by a period of system-wide improvement in standardized test scores leading to a 2- to 3-year plateau in student performance gains. The complexity in this picture arises from the fact that the improvement gains varied for different sets of schools. Focusing on the low-performing elementary schools in 1990 (82% of the city’s 429 regular elementary schools), Simmons shows how test scores declined initially in all these schools, began to rise in 1992, and plateaued between 1993 and 1995. Among these schools, however, Simmons identifies half as “high-gain” schools that showed evidence of significant improvements in student performance, while the other half were “low-gain” schools that showed minimal overall improvement in this phase. The scores leveled off for both sets of schools, but at different performance thresholds. Following a partial recentralization of district authority and a reorganization of district direction, support, and intervention for school improvement, student achievement scores improved significantly among all these schools from 1995 to 1999, but stalled again between 1999 and 2001, leading to another reorientation and reorganization of district-level involvement in supporting ongoing improvement efforts in the schools. This change was followed by renewed evidence of improvement in the high-gain schools, but did not have an effect on the stalled achievement test results in the low-gain schools. Again, our purpose here is not to explore the details of the district improvement strategies and their evolution over time, but rather to highlight some patterns of change associated with a long-term system-wide improvement effort. The Chicago case reinforces the expectation that a system-wide improvement strategy is likely to result in short-term improvement in student performance followed by a leveling off or plateau in student learning gains, and that further improvement may require strategic rethinking and reorganization of system-level leadership and support for change at the school level. The difference in the Chicago case is the recognition that the pattern of gains and plateaus may vary for schools in varying circumstances across the system. Thus, the support system for improvement has to become increasingly differentiated in response to the performance trends and circumstances of individual schools and sets of similar schools. Elmore and Burney (1997) also talk about the development of a district approach to improvement in NYC District #2 that became increasingly responsive to differential progress in achieving school improvement targets in the context of district-wide goals.

A different scenario of wide-scale improvements in student performance over a sustained period of time occurred in Texas during the 1990s and into the current century. The history of this process and the controversies surrounding the social and educational implications of the results are widely documented (e.g., Haney, 2001; Klein, 2001; Scheurich & Skrla, 2001; Skrla et al., 2001a, 2001b; Valencia et al., 2001). Texas was one of the first states in the United States to introduce a standards-based curriculum aligned with a state accountability system that included annual criterion-based testing of student performance on the curriculum, state-mandated performance indicators and reports, and public ratings of schools and school districts on the basis of student performance (aggregated and disaggregated by student characteristics, such as race and family income). Over a 10-year period, schools and districts across the state charted remarkable gains in student achievement on the state tests, and a significant narrowing of gaps in performance between racially and socio-economically different sub-groups of students. Controversy surrounding the results centered on claims that the state curriculum standards and tests were set at a low level of expectations for student learning, that Texas students did not perform nearly as well on nationally normed tests, that the state education agency inflated performance ratings by manipulating minimum pass standards, that the accountability pressures led teachers to concentrate classroom instruction more on preparing students for the tests than on learning per se, and that the claimed improvements in student learning, particularly for minority and poor students, were more illusory than real. By 2001, as seen elsewhere, student results had plateaued, but at relatively high levels, with many schools and districts reporting 80% or more of their students performing at or above the state’s minimum standards for acceptable performance in reading, writing, and mathematics. The state’s response at this point was not to rethink and reorganize its support system for ongoing improvement under the existing curriculum regime. Instead, the state introduced a more challenging curriculum and testing system. In essence, the state raised the bar of standards and acceptable performance. The immediate effect was a decline in student, school, and district performance levels. This created a new context and stimulus for improvement (and an impression that some schools and districts that were high performing under the old system were not so effective after all). Here is not the place to engage in the debate on the educational and social significance of the “Texas miracle” from 1991 to 2001 (see the works cited). The Texas case is, however, important to this discussion of the conceptual, methodological, and political complexities of measuring and judging continuous improvement in student learning over time. It reminds us of the implications of stability and change in how we assess and judge the quality of, and change in, student learning over time. It also illustrates that when confronted with what may be an inevitable leveling off of gains in student learning across a system, system authorities can respond in different ways. In England and in Chicago, they reoriented and reorganized the external support systems to achieve better quality implementation within the existing curriculum and accountability system.
In Texas, the authorities changed the curriculum and performance standards, with no major shift in state support for implementing the altered expectations and accountability requirements. It remains to be seen whether Texas schools will, on a wide scale, register renewed gains under the more challenging curriculum and performance standards, given the low investment at the state level in considering whether and how this shift might require changes in the infrastructure of system support for improvement at the school and district levels.

Concluding Remarks

The aim of this chapter was to review and discuss different ways in which education change researchers and analysts have conceptualized, studied, and explained the process of change over time, particularly in terms of successive stages, phases, or states. Popular concepts used to make sense of change over time were discussed in relation to change as an individual phenomenon and as an organizational phenomenon at the level of schools and school systems (district, state, nation). These included the developmental schema of affective Stages of Concern and behavioral Levels of Use applied to individuals implementing innovations associated with the Concerns-Based Adoption Model (Hall & Hord, 2006); the three-stage/three-phase mobilization, implementation, and institutionalization model of planned changes in programs and practices in organizations (Berman, 1981; Berman & McLaughlin, 1977; Fullan, 1981, 2007); continuing developments in understanding the phenomena of mutual adaptation (Datnow et al., 2002) and the sustainability of change; and recent attempts to conceptualize and describe what continuous improvement looks like at the school and school system levels in terms of both student outcomes and system-level organization (e.g., Anderson & Kumari, 2008; Fullan, 2003; Hopkins, 2007). While many of the concepts reviewed are well known and often applied, this review draws attention to some of the knotty conceptual problems associated with their application to empirical findings from research on educational change. On the basis of this review, I argue that the fit of these theoretical concepts to practice should not be taken for granted by education researchers and practitioners. More research effort is needed to deepen theoretical development along these lines in our ongoing efforts to construct a discourse that accurately describes and explains educational change. In sum, as knowledge workers in the field of educational change, we need to continually challenge and refine our conceptions and explanations of the change process over time.