
In their remarkably prescient study, The New Meaning of Educational Change, Fullan and Stiegelbauer (1991) challenged policy-makers to pay more attention to the accumulated wisdom of research. “Armed with knowledge of the change process and a commitment to action,” they argued, “we should accept nothing less than positive results on a massive scale – at both the individual and organisational levels” (p. 354). It was a call to arms which the New Labour government, under Prime Minister Tony Blair, was to take very seriously.

In this chapter I review some of the major developments which happened over the 10 years of the Blair administration from 1997 to 2007 and attempt to draw some broad conclusions. What were the key levers of educational change and what did they produce? Crucially, were the fabric and infrastructure of contemporary English schooling transformed in ways that have subsequently proved self-sustaining?

In calling for change, however, Fullan also counselled realism. There is a “huge negative legacy of failed reforms” littering the educational change literature. To rise above this legacy, policy-makers would need more than “good intentions” and “powerful rhetoric.” They would need not only to initiate reforms but to develop the capacity to learn from them.

The Commitment to a Third Way

Blair had made “education, education, education” his election mantra. Consequently, many in the education professions assumed that when New Labour took office there would be a return to the values that had informed “old” Labour; although after some 17 years out of office, what precisely these values were beyond platitudes about equal opportunities and meeting the needs of the educationally disadvantaged had largely faded from memory.

New Labour’s strategic thinking has been described as the “Third Way”. Anthony Giddens, widely credited as its intellectual architect, has argued that “investing in human capital” is a central “tenet” of Third Way thinking and that the “key force in human capital development obviously has to be education. It is the main public investment that can foster both economic efficiency and civil cohesion” (2000, p. 73). In short, he suggested, there were sound economic and social reasons for prioritising expenditure on education.

Giddens was dismissive of suggestions that Third Way thinking was less preoccupied with inequality than earlier Labour ideologies. It is concerned, he argued, “both with equality and pluralism, placing an emphasis on a dynamic model of egalitarianism … focusing primarily upon equality of opportunity but stressing that this also presumes economic redistribution” (2000, pp. 120–121). Inequality, on this view, can be addressed particularly through strategies for tackling the needs of the long-term poor. “Enduring poverty,” he concludes, “is usually coupled to exclusionary mechanisms and hence affects most aspects of life.” Children from poor backgrounds get a raw deal in the womb and suffer “abuse and neglect” at home. These “disadvantages,” he suggests, “carry on through their education or lack of it.” “Schools in poor neighbourhoods are often under-funded (and) staffed by demoralised teachers, who have to concern themselves with keeping control in the classroom rather than with instruction” (2000, p. 114).

The policies can almost be read off from the diagnosis – more funding for disadvantaged schools, greater attention to the sources of teacher morale, more focus on the sources of pupil disengagement which underpin unruly behaviour and, perhaps, more support for good teaching; in short, more investment in education. Interestingly, Giddens didn’t have much to say about mechanisms for holding schools to account.

Whether there is anything distinctive here in Third Way thinking as regards the education of the socially and educationally disadvantaged is a moot point. There are strong echoes of the Plowden Committee’s recommendations some 30 years earlier which lamented the fate of poor children caught up in “a seamless web of circumstances” in which one disadvantage was compounded by another (Plowden, 1967, para. 131). Although our understanding of what school improvement involves has moved on over the ensuing period, the underlying diagnosis is strikingly similar.

Where the Third Way has differed substantially from its predecessors has been in its thinking about how school improvement might best be pursued. In seeking to mobilise the forces for change New Labour has shown itself to be very flexible in interpreting the inheritance. More autonomy has been given to schools (where they have been prepared to take it) and public–private and public–voluntary partnerships have been fostered as a means of hastening reform. At the same time there has been some recasting of the lines of communication. In the process, central government has assumed more of the roles that, under earlier divisions of labour, had been reserved for local authorities.

Five Strategic Challenges

When New Labour assumed power it faced at least five strategic challenges in developing its reform programme.

First, what organisational structures of schooling to maintain and promote. The Conservatives had never completed the comprehensive reforms Labour had initiated. With a view to offering parents greater “choice and diversity”, they had encouraged a range of different types of school to develop, with a particularly prominent role for the grant-maintained sector. Should New Labour revisit its earlier agenda, accept the Conservative inheritance or take off in new directions? In the event, the commitments to “parental choice” and a diversity of school types were retained, with the establishment of new Academies eventually extending the range.

Second, how to improve individual schools, especially those serving socially and educationally disadvantaged populations. There were considerable differences between schools in performance and some were getting left behind. How could stronger and more robust processes of school improvement be developed that were capable of creating “continuous improvement”? And what might they cost? Through an extensive system of target-setting individual schools were put under pressure to improve their performance. At the same time they received enhanced resources and other forms of support with a view to encouraging innovation and further development.

Third, how far to intervene in the ways in which schools taught the National Curriculum through national initiatives. The Conservatives had already started to pilot a National Literacy Strategy in primary schools. Should this kind of approach be extended to other core skills such as numeracy as well as to the secondary school? The decision to proceed was taken quickly; the Literacy Strategy went national, the National Numeracy Strategy started in 1999 and was followed by the Key Stage 3 (11–14) strategy in 2001. A major programme of additional funding for schools serving areas of social disadvantage was also launched under the Excellence in Cities initiative.

Fourth, how to secure teachers’ commitment to the reform agenda. Teachers’ pay had fallen behind that of other professions, some schools were experiencing severe recruitment and retention problems, and relationships between at least one of the unions (the largest – the National Union of Teachers) and the government had been fraught. In response a series of workforce reforms began to be implemented with the intention of raising teachers’ pay and status and generally “modernising” the teaching profession.

And fifth, how to use strategies for accountability to support the improvement process. The Conservatives had claimed that league tables of schools’ results and Ofsted inspections informed parental choice, held schools to account and encouraged them to improve their standards. Should these be maintained or modified? Again the decision was quickly reached to retain all of them, albeit with some modifications. Meanwhile a heavy programme of traditional school inspections was implemented until, in 2005, a so-called “self-evaluation” component was incorporated into a lighter-touch but more frequent inspection regime.

Building on the Inheritance

In the event, New Labour’s strategy for policy development was to build on much of the Conservative legacy that had emerged during the first half of the 1990s whilst selectively introducing new thinking. There was no radical break with the past. To the charge that such an approach simply represented a “middle way”, Giddens has argued forcibly that “a concern for the centre should not be naively interpreted … as a forgoing of radicalism or the values of the left.” “Many policies,” he writes, “that can quite rightly be called radical transcend the left/right divide. They demand, and can be expected to get, cross-class support – policies in areas, for example, such as education, welfare reform, the economy, ecology and the control of crime” (2000, p. 44).

There were several respects however in which New Labour sounded distinctly different from its predecessors. First, and perhaps most importantly, it initially “talked tough”. Borrowing from the language of crime reduction there was to be “zero tolerance for failure” and the pursuit of higher standards was to be “relentless”. In an early move, shortly after assuming office, the secretary of state for education “named and shamed” ten of the worst-performing schools in the country, seemingly oblivious to the fact that a substantial part of the explanation for their apparent “failure” might lie in the socially blighted circumstances of the communities they served. Second, there was to be a much greater emphasis on results; the way to value and develop the outputs of education was to measure them. Third, there were arguments for more “joined up” thinking. And fourth, it began to talk up the case for so-called “evidence-based” practice. In future, educational policies would be researched and evaluated. In the course of these developments the rudiments of systemic thinking began to emerge.

The Court of Measurable Results

Evaluating educational reforms is a potentially complex business but, under New Labour, policy-makers have usually been quick to draw attention to improvements in measurable results. Over the course of the decade the percentage of primary school pupils achieving Level 4 in English (the “expected” national level of performance for an 11-year-old) climbed from 63% in 1997 to some 80% in 2007, an overall increase of 17 percentage points (see Table 1). Similarly, in maths the percentage rose from 62% to 77% over the decade.

Table 1 Percentages of primary school pupils age 11 in England securing “expected level of performance” (Level 4) on headline measures at key stage 2 (1997–2007)

These figures look impressive. However, when one tries to discern patterns and trends and offer explanations, the picture becomes more problematic. First, some comment about the timing of the changes is necessary. It takes a while for new policies to be rolled out and take effect. A new government’s inheritance in its early years is inevitably dependent on the activities of its predecessors. New Labour was powerless to do anything about the 1997 results and it was not until 2001 that the national test results were picking up the full effects of any new policy developments.

Looked at from a statistical point of view, the trends in results seem to fall into three distinct phases. In the first phase, there was a 12-percentage-point increase in English over the period 1997–2000. In the second phase (2000–2003) the results plateaued with zero growth. This was followed by a third, more modest phase in which, over the next 4 years (2003–2007), results increased by a further 5 percentage points.

Looked at another way, however, more than half the growth over the decade occurred amongst just two cohorts (1994–1998 and 1995–1999). The fact that these improvements were sustained over subsequent years suggests that they were neither illusory nor ephemeral; if they had been, one might have anticipated some backward steps or a rougher ride. But drawing conclusions about the effects of policy development is more problematic. What is most notable about this period is that it was one of transition: politically, from Conservative to New Labour but, perhaps more importantly, educationally, as the nature and rules of the game changed rapidly and schools became more and more caught up in the newly emerging “performance culture” New Labour was developing.

Further support for this emphasis on the impact of the “transitional stages” of the reforms comes from the evidence on pupil performance in maths. Here no less than half the growth over the course of the decade came from a single cohort; the performance of the 1994–1998 cohort had dipped to 59% but the 1995–1999 cohort bounced back with an impressive increase to 69%. Over subsequent years, however, progress slowed down considerably; whilst the results continued to creep up, year on year, the overall gains were less impressive.

Rising Achievement in Secondary Schools

Much of the international attention has been focused on reforms at the primary level. Table 2 reports the results of pupil performance in the national GCSE examinations taken at age 16. These are nationally examined and taken in a wide range of different subjects; pupils are entered for them according to their aptitudes and interests. The results are subsequently captured in a headline figure which reports the proportions of pupils who secured grades A* to C in five or more subjects.

Table 2 Percentages of secondary school pupils achieving the traditional headline measure of “5 or more A*–C grades” in GCSE examinations at age 16

There has been a fairly inexorable trend in the percentages climbing over the 5+ A*–C hurdle since the previously separate examinations for more and less able pupils were amalgamated into the national GCSE exam in 1988. The reasons for this increase are annually disputed when the results are published, being variously attributed to declining standards on the part of the examiners, improved teaching methods, greater student commitment and so on. There has also been extensive use of what Gray et al. (1999) have described as “tactical” approaches.

The results for the decade under review are reported in Table 2. In 1997 some 45% got over the hurdle, by 2007 62% did so – in all a considerable increase. Nonetheless, the table again reinforces the view that educational policies are typically slow to unfold and take effect.

For most of New Labour’s first term in office the table simply records the results of its predecessor’s efforts. It is not until the turn of the millennium that the effects of the new regime can begin to be discerned. Over the period 1997–2000 the headline statistic improved at a rate of about 1% a year; this continued the seemingly inexorable trend launched in 1988. From 2000 onwards it improved a little faster; unfortunately, the figures do not satisfactorily take account of numerous minor (but in combination important) changes in the arrangements for counting up what could contribute to the indicator, and these gave a particular boost to the results in 2005. The changes included such things as counting in some vocational subjects which had previously been excluded and allowing some qualifications to count as more than one grade.

Concerns that schools were focusing on “easier” subjects in order to boost perceptions of their pupils’ performance led to debates in the early 2000s about the need to include passes in maths and English in the basket of subjects to be counted in the headline measure (see last column of Table 2 above). Progress with respect to this indicator seems to have been steadier over the period. In 1997 performance on it lagged behind the traditional measure by some 10 percentage points. Unfortunately, over the course of the decade, whilst the numbers reaching this hurdle rose by some 11 percentage points, performance on this indicator fell still further behind. By 2007 the gap between the two had widened to some 15 percentage points. Furthermore, the rate of improvement during the second half of the decade was only a little more rapid than that pertaining in the first half. On neither indicator did the step change in performance trends that had been anticipated actually emerge.

The Arena of International Comparisons

For most of the late twentieth century England was a sporadic and reluctant participant in studies based on international comparisons. It was largely content to evaluate itself in terms of its own national assessments. At the turn of the century, however, these predominantly isolationist attitudes began to change. They were reinforced, no doubt, by the impressive performance of its 9-year-olds in the Progress in International Reading Literacy Study (PIRLS, 2001) which seemed to offer welcome confirmation of the success of New Labour’s reforms. England came third out of 35 countries, just behind the Netherlands and Sweden. But trumpeting success was possibly premature; in the 2006 survey the country slumped to 19th position (Baer, Baldi, Ayotte, Green, & McGrath, 2007: Table 2). And whereas in 2001 the performance of English pupils stood out, by 2006, regardless of which measure was employed, they appeared merely average. It was possibly some consolation that performance in the Netherlands and Sweden also appeared to have declined, although not to the same extent.

There was some limited support for more optimistic interpretations of the performance of English pupils from the Trends in International Mathematics and Science Study (TIMSS), which studied 10-year-olds. The period 1995–1999 had seen little or no change in performance levels in primary schools but between 1999 and 2003 they leapt up (Ruddock et al., 2005).

The vagaries of such comparisons, however, are underlined by some of the results from the OECD’s Programme for International Student Assessment (PISA). In science the United Kingdom’s students (not just England) performed significantly above the OECD average (OECD, 2007: Table 2). In reading the United Kingdom appeared in the upper half of the table but its performance was not significantly different from the OECD average (OECD, 2007: Table 4). Performance, meanwhile, in maths in 2006 was also not significantly different from the OECD average (Table 5). Comparisons over time were restricted to just 3 years (the changes between 2003 and 2006) but the position was broadly unchanged. This could be seen as disappointing since the 2006 cohort will have been more exposed to both the primary and secondary strategies for raising performance.

The court of international comparisons provides fertile territory for those who are prepared to pick selectively over the evidence in support of their cases. The safest conclusion to be drawn, however, is that the international surveys do not as yet provide convincing evidence that England is performing at anything other than the sorts of levels one would expect a relatively well-developed and resourced educational system to produce – a good performance but not yet, perhaps, an outstanding one.

Systemic Thinking and the Search for Powerful Levers

In their study of systemic reform Goertz, Floden, and O’Day (1996) refer to the growth of so-called systemic approaches in the USA during the 1990s. Systemic reforms, they suggested, “embodied three integral components”:

  • “the promotion of ambitious student outcomes for all students;

  • the alignment of policy approaches and the actions of various policy institutions to promote such outcomes; and

  • the restructuring of the public education governance system to support improved achievement.”

The terminology was a little slower to emerge in England than the USA but New Labour’s intentions were clearly similar. At least two distinct phases of systemic thinking can be identified. During the first there was a determined edge to New Labour’s policy-making. The vision came from outside schools themselves. Accountability in the form of inspection loomed large as did national testing, targets and league tables. Neither was school “failure” to be tolerated. The drive for higher standards in literacy and numeracy, exemplified in the National Strategies, was to the fore.

Michael Fullan has always been clear that both “pressure” and “support” are needed if change programmes are to deliver. But he has also stressed that they are needed in equal measure. Broadly speaking, if the driving characteristic of the first phase had been pressure, by the early 2000s some of the rhetoric had mutated into what, retrospectively at least, can be characterised as a greater commitment to support. There was to be a New Relationship With Schools; accountability was to be more “intelligent”; there was to be a greater emphasis on supporting teaching and learning through more “personalised” approaches; and the quest for excellence was to be combined with the pursuit of enjoyment, if only at the primary stages. Importantly, some of the vision for change was to come from schools themselves. The role of school leadership was increasingly cast as one of building capacity. An arguably highly prescriptive and hard-edged vision had given way to a somewhat softer one.

To deliver systemic reform government needs access to a variety of levers on the processes of change. It is beyond the scope of a short review to enumerate the wide range of reforms New Labour initiated, let alone to evaluate them all. I have therefore deliberately confined myself to considering just four developments: the use of external inspection of schools, the National Strategies, the Specialist Schools programme and the Excellence in Cities (EiC) initiative.

These areas have been chosen for three main reasons: First, because they exemplify different facets of New Labour’s reform agenda to improve standards – the development of accountability, the restructuring of the teaching of basic skills, the moulding of new forms of school organisation and the creation of enhanced support for disadvantaged schools; second, because their supporters have consistently maintained that they “worked”; and third, because each, in its different way, when scaled up to the national level, represents a very substantial investment whether it is judged in terms of the consumption of educational energy or of educational finance.

The Effects of Inspection

When New Labour came to power in 1997 Ofsted was still completing its first cycle of inspections. Its motto of “improvement through inspection” fitted the reform agenda and, to the dismay of many teachers and schools, its remit was expanded. In inspection government had discovered a powerful instrument for achieving compliance with its wishes. It could use inspection to shape institutions in its desired image and it was fairly ruthless in doing so. The key question for this analysis, however, is whether inspection pushes up measured results and, somewhat surprisingly, this has turned out to be a matter for debate. Most of the evidence on the effects of inspection was essentially anecdotal; it certainly seemed to confirm that inspection “worked” but systematic research was in short supply.

Three independent studies have examined the effects of inspection. They differed in their samples, time scales and the sophistication of their methods. The first study, by Cullingford and Daniels (1999), claimed that “Ofsted inspections have the opposite effect to that intended.” They reported that inspected schools fell behind others. However, there were some doubts about the representativeness of their sample. A second, more sophisticated study by Shaw, Newton, Aitkin, and Darnell (2003) had similar difficulty in isolating an “inspection effect”. They found that in comprehensive schools, which made up 90% of the schools in the study, “inspection did not improve exam achievement” although, in the small minority of schools where there was formal selection, there was “a slight improvement”. A third study by Rosenthal (2004) reached similar conclusions. She found “adverse effects on the standards of exam performance achieved by schools in the year of inspection” and noted that “no offsetting later effects for inspection were discernible.”

Eventually, some 12 years after its foundation, Ofsted replied to its critics (Matthews & Sammons, 2004). The results were finely balanced. The analysis spanned the period from 1993 to 2002. It found that in some years (5 out of the 9 years covered) “a higher proportion of inspected schools improved over a 2-year span than all schools” whilst in other years (4 out of the 9) the proportion was lower. A second analysis compared the results, 4 years later, of schools that had and had not been inspected. Matthews and Sammons report that “the results indicated that, in general, there was little difference between those schools that were inspected and all schools.” They concluded that their analyses “failed to show any consistent evidence that results spanning the inspection event over the last 8 years are either enhanced or depressed relative to other schools” (2004, p. 37).

These findings did not receive much publicity; nor, for that matter, do they appear to have had much influence on Ofsted’s hegemony. But the fact that the agency felt obliged to justify its contribution to schools’ performance in such terms and that it reached such an ambivalent conclusion is of considerable relevance. Given the wide variety of states and circumstances of schools being inspected, it is probably unrealistic to expect some all-embracing “inspection effect”. In many schools an inspection is unlikely to add much to what is already known although it may help to catalyse matters or even, in some instances, galvanise action. But whether it will actually do so depends on many factors beyond the inspectors’ control.

The Impact of the National Strategies

The decision to go nationwide with the National Literacy Strategy (NLS) was made soon after New Labour took over and whilst the pilot was still underway. They had committed themselves to ambitious targets – in due course 80% of primary pupils would be expected to reach Level 4 compared with the 63% that were achieving this when they took over.

In the event the gamble paid off. The pilot showed significant improvements in children’s test scores (Sainsbury, 1998). Nonetheless going to scale was expensive and brought problems. Many primary teachers felt they already understood how to teach reading and that they did not require further assistance; consequently there was some resistance. The National Numeracy Strategy (NNS), on the other hand, did not encounter the same problems. Teachers were generally less confident about their ability to handle maths and some of the “mistakes” made during the implementation of the NLS were avoided.

The team commissioned to evaluate the programmes were enthusiastic about what had been achieved in the early stages. “The Strategies,” they reported, “have had an impressive degree of success, especially given the magnitude of the change envisaged; in many ways they have succeeded in transforming the nature of the country’s primary schools” (Earl et al., 2003, pp. 127–128). Their observations, however, were mainly based on the ways in which classroom practice had been influenced. “It was more difficult,” they felt, “to draw conclusions about the effects of the Strategies on pupil learning than on teaching practice.”

Understanding the contribution of the National Strategies to enhancing pupil performance is problematic. Research on the implementation of educational reforms might lead one to expect that the reform dividends would emerge slowly over time – the major rewards would begin to flow when the changes were fully bedded down. The English experience, however, belies this expectation. The most remarkable thing about the NLS is, perhaps, that most of the changes took place during the early stages of its development; between 1997 and 2000 performance rose from 63 to 75% (see Table 1). Somewhat surprisingly, the first cohort to experience the strategy in its entirety did not improve on this position, and neither did the second. In fact, across four successive cohorts standards of performance stood still.

The NNS produced equally conflicting results. After initially promising developments, the pace of improvement slowed dramatically; again standards essentially plateaued across four cohorts (2000–2003) before resuming a slower upward trend. In science, meanwhile, results rose from 69% in 1997 to 88% in 2007. Yet this was an area in which there had been no National Strategy at all; teachers had been largely left to their own devices.

As with the GCSE results discussed earlier, possible changes in the performance metrics have made interpretations of trends over time more difficult. A detailed but little-publicised report for the Qualifications and Curriculum Authority, for example, concluded that, “around half of the apparent improvement in national results (between 1996 and 2000) may have arisen from more lenient test standards” (Massey, Green, Dexter, & Hamnett, 2002, p. 224).

Other researchers have also questioned the extent of improvement. Tymms (2004) sought support from the independent Statistics Commission (2005) for validation of his claims that the gains had been overstated. Their conclusion provided some support for both sides. “It has been established,” they concluded, “that the improvement in Key Stage 2 test scores between 1995 and 2000 substantially overstates the improvement in standards in English primary schools over that period” but added that “there was nevertheless some rise in standards.”

Renewing the Comprehensive School

The idea that there should be “diversity and choice” in the educational market place was already firmly established by the mid-1990s. New Labour launched the Specialist Schools programme as an important strategy for revamping the somewhat jaded ideals of the comprehensive school. By 2005 some two-thirds of secondary schools had been given this status. Applicants were expected to raise some external sponsorship and to present a convincing case for being given specialist status in one (or possibly two) areas of the school curriculum. In exchange they were offered additional funding (up to 5% per pupil).

In their 5 Year Strategy the DfES maintained that “specialist schools have improved faster than the average and add(ed) more value for pupils, regardless of their prior attainment” (DfES, 2005, 4: 15). A report by the Specialist School Trust claimed, furthermore, that “the longer that schools are specialist, the greater the specialist school dividend” (Jesson, Crossley, Taylor, & Ware, 2005).

Other researchers have been more sceptical about the extent of this premium. An early analysis by Schagen, Davies, Rudd, and Schagen (2002), for example, found very little edge in favour of specialist schools once differences in intakes were tightly controlled for. This general picture was confirmed in a later analysis by Levacic and Jenkins (2006). They reported, at best, a very small edge for the specialist sector.

Did this performance edge for the sector result from the schools’ new status as specialist institutions, as the government claimed, or from other related factors? Schagen and colleagues have pointed out that early recruits to the programme had to demonstrate that they were already performing at “acceptable levels” in value-added terms. The Select Committee on Education and Skills (2005, para. 11) also drew attention to the fact that the schools needed “school management and leadership competencies” in place before they sought specialist status and that they got extra funding as a result. Might not better funding and superior management be responsible for the differences?

The suggestion that the specialist dividend flowed from schools’ established characteristics rather than added to them was underlined, albeit indirectly, by Ofsted (2004, p. 3). They identified a range of pre-existing factors contributing to these schools’ success including “working to declared targets, dynamic leadership by key players, a renewed sense of purpose, the willingness to be a pathfinder, targeted use of funding and being part of an optimistic network of like-minded schools.” Doubts were also expressed about whether the requirement for private sponsorship biased take-up towards schools which had historically commanded parental support; related to this was the finding that schools with such strong parental support tended to be located in more middle-class areas.

Spending More on the Educationally Disadvantaged

The Excellence in Cities (EiC) initiative, in various ways, embodied five key tenets of New Labour’s policy discourse (Power, Whitty, Dickson, Gewirtz, & Halpin, 2003): first, that improved educational provision could combat social disadvantage; second, that good leaders could improve schools in any kind of context; third, that “joined-up problems require joined up solutions” through the development of multi-agency partnerships; fourth, that improvement is best secured by tying resources to outcomes; and fifth, that private sector involvement can help in securing change.

In seeking to tackle the causes of educational disadvantage New Labour faced one of its stiffest challenges. Furthermore, some of its early forays into this field had proved problematic. When Ofsted had looked at Education Action Zones it had provided a fairly cautious endorsement. “Some zones,” they reported, “have made more consistent progress and had a greater impact than others” (Ofsted, 2001, para. 10). And, they added, “they have not often been test-beds for genuinely innovative action. More often, they have offered programmes which enhance or intensify existing action.”

When the EiC programme was launched it represented the largest single investment ever made in tackling educational disadvantage. There were seven major strands. These included programmes to support “gifted and talented” children; the provision of learning mentors; the establishment of Learning Support Units for children with special needs and City Learning Centres to provide ICT resources for groups of schools and their communities; Action Zones which sought to link primary and secondary schools to address local priorities; and an expansion of the existing Specialist and Beacon school programmes.

Opinions varied about whether this menu of activities amounted to a coherent, “joined-up” strategy. When the programme came to be evaluated there was a wide range of outcomes, most of them worthwhile and predictable but none in itself very dramatic. The evaluators commented that “most of the teachers and senior managers taking part … were very positive about the policy.” They added tellingly that “although only a minority directly linked EiC with raised attainment, many noted the ways in which EiC was creating a better environment for learning, improving pupils’ motivation and raising their aspirations and contributing to improved teaching and learning, all of which would lead in the longer term to improved levels of attainment” (Kendall et al., 2005, p. 16).

Regrettably, the desired improvements in measured results proved more elusive. Pupil attendance had improved amongst EiC schools, but only by “slightly more than 1 day per pupil.” Some modest gains in pupil performance amongst 14-year-olds in maths received some publicity as did improvements in the proportions of pupils achieving the GCSE hurdles in some of the lowest-achieving schools. But the more the evaluators were able to compare like with like, the more modest the outcomes appeared to be. “Taken together,” they concluded, “these findings do not support the hypothesis that pupils in EiC areas were, overall, making greater progress than those in non-EiC areas” (op. cit., 2005, p. 16). Given the social and educational importance of the EiC agenda, this conclusion was disappointing.

Changing Tack

In the early years of the Blair administration every problem seemed to generate a new solution. Policies of one kind or another flowed from the centre with impressive regularity. Some schools rose to the new challenges and exploited the opportunities but others, lacking a clear sense of their own identities, became embattled. On the surface there was a great deal of change but not all of it took root. The majority of schools committed themselves fairly wholeheartedly to the central agenda of raising measured attainment but even here the more successful found it difficult to keep going for long. Improvement was often rapid but soon tailed off. It was unusual for a school to boost pupils’ attainment for more than 3 years at a time; only a minority managed it a second time over the course of a decade (Mangan, Gray, & Pugh, 2005). Not surprisingly, “sustainability” became the watchword and, to its credit, New Labour learnt from some of the bruises it had received in the battle for educational change.

As it entered its third term of office there were perceptible signs that some of the tougher messages had been absorbed. Crucially, there was a shift away from some of the grander schemes, driven from the centre, towards more localised and contextualised approaches in which, ostensibly, schools were to be given a greater say in how they would direct their energies. As Hopkins (2007, p. 171) has put it, schools need incentives rather than legislation and a greater sense of their own agency if they are to see themselves as test-beds for their own improvement. Central to this revised vision was the realisation that the strongest educational reform is built, both in practice and in theory, institution by institution. How this change of strategy will play out remains, at the time of writing, to be seen.

Fostering educational change is, by its nature, a highly risky enterprise. Governments that commit themselves to ambitious targets must expect, at times, to stumble. Some of New Labour’s policies were successful, others less so; nearly all of them were ambitious but regrettably the returns were rarely as high as the expectations. The readers of this volume will not need much reminding that where changing schools is concerned there are few “easy wins”.