Previous chapters explored the influences on the principalship through a lens of performativity and the quantification of education. A significant theme to emerge from the literature and Departmental strategic documents, as well as from interviews with participants, was the importance or value placed upon school data as a driver for much of the work being undertaken in participants’ schools and their wider shared contexts. Under Queensland’s policy ensemble, principals were explicitly required to lead their schools with a focus on data as a driver for decision-making, and these requirements were reinforced through multiple strategic and Departmental policies and processes. The strategic agenda required principals to ‘know [their] data’ in order to monitor performance and inform practice, and to ‘analyse student data regularly to inform improvement’ (QDET, 2014, p. 2).

Furthermore, the strategic agenda directed principals to other policies and processes that emphasised the use of data, such as the School Performance Assessment Framework (QDET, 2015a), which focused on the importance of school data in monitoring schools on a quadrennial basis. The National School Improvement Tool, the guideline used for these quadrennial reviews, incorporated a heavy emphasis on the use of school performance data to drive school improvement. One of the nine key domains within the tool related to ‘analysis and discussion of data’ (ACER, 2013), and references to the use of data could also be found in three of the other domains. The expectation for principals to be data-literate and data-focused was clear and pervasive in messages from the system.

This chapter explores the place held by data in the principalship and how participants’ practices and beliefs were influenced by discourses of data. An original contribution to the field in this area is the specific empirical focus on the influence of the system-generated School Performance Data Profile in Queensland schools.

Mindful of Richard’s comment, as a supervisor of principals, that it was ‘clearer that the principal is responsible for the success of every student’, and of the knowledge that this success was being measured and quantified in a number of ways, this chapter explores the ways participants used data to guide school improvement; how school data profiles were used in the supervision and development of principals; and how this influenced principals’ relationships with their supervisors. Finally, the value placed upon NAPLAN results is explored alongside participants’ comments regarding the importance of context and ICSEA scores. The chapter concludes with an analysis of Max, Judy, and Scott’s common approach of focusing on the individual ‘learning journeys’ of their students, and how this reflected their disciplining (Foucault, 1977) by the policies and discourses of the current reform climate. This analysis highlights, in part, the nature of principals’ efforts to cultivate more educative logics (Hardy, 2015b) within a performative culture. Max, Judy, and Scott all wanted their students to achieve positive learning outcomes, even as this had become synonymous with measurable data.

Queensland’s School Performance Data Profile

To contextualise many of the participants’ comments about data, it is first necessary to have a clear understanding of the School Performance Data Profile (also referred to as the ‘data profile’ or ‘the profile’) and the place it held in the case studies. The school performance data referred to in these documents primarily took the form of a school data profile compiled by the School Improvement Unit, a subsection of Queensland’s education department primarily responsible for monitoring school performance through the metrics found within the profile, as discussed below. Schools received four updated versions of their profile each year, twice per semester, at points aligning with releases of key data such as NAPLAN and School Opinion Survey data. At the time of the case studies, the profile comprised six pages of multiple representations of data, though in the past it had been twice that size. It contained a range of school data, such as student achievement data (including NAPLAN, school-based subject achievement, and Closing the Gap data as discussed in Chap. 2); student demographic data (including enrolment, student needs, attendance, and disciplinary absence data); and school management data (including school audit data, school opinion survey results, and financial and facilities-related data). NAPLAN data were presented in comparisons with ‘like’ schools based on MySchool ICSEA scores, as well as against all schools in the nation. Other data, such as enrolment and attendance data, and school-based assessment data, were presented in comparison with all schools state-wide.

A key focus of the profile was its multiple representations of NAPLAN data, which were presented in eleven different formats across two and a half pages. Just over 40% of the profile related to NAPLAN data, providing insight into the importance placed upon NAPLAN as a driver for school performance management and review. Aligning with discourses of transparent, data-based accountability espoused by the system, the proliferation of data, and the data profile itself, were key tools in the management and supervision of principals (Bloxham, Ehrich, & Iyer, 2015).

The data profile thus becomes a tool of surveillance, acting as the means by which the system monitors and judges the work being undertaken in schools (Foucault, 2000; Gillies, 2013). Principals are expected to use data to inform their practice while ‘delivering extraordinary and sustained improvement and achievement’, according to explicit expectations from the system (QDET, 2014, p. 1). A recurring theme throughout the research undertaken in these case studies was the centrality of the data profile to each principal’s work. Max, Judy, and Scott referred to the data profile frequently throughout our interviews. When they discussed comparisons with other schools in this chapter, they were referring to state-wide or nation-wide comparisons from the data profile (rather than to their statistically similar schools on MySchool) unless they explicitly stated otherwise; this is an important note for contextualising their comments.

Max, Judy, and Scott all placed great emphasis upon the profile and used it to guide their work in a number of ways. In fact, they referred to it as the very measure of school improvement; as principals, their position description emphasised that their role was to improve educational outcomes at their schools. The profile was thus central to the principalship, used as a measure of improvement by the principals, by their supervisors and other regional support staff, and by other Departmental staff such as the unit tasked with monitoring and improving school performance.

The Influence of System-Generated Data Via the Data Profile

As discussed in previous chapters, principals are explicitly required to have a significant focus on ‘school improvement’ as part of their role description. Previous chapters also established that discourses of school improvement guided participants’ work explicitly through ‘policy as text’ in the form of documents such as position descriptions and strategic agendas (QDET, 2015a), as well as through ‘policy as discourse’ from principals’ supervisors and system leaders (Ball, 1993). School improvement and school data are inextricably linked within the systemic documents referred to previously, including the system’s strategic agenda, the School Performance Assessment Framework, and the National School Improvement Tool, which guides regular school review processes. Max, Judy, and Scott were asked to comment upon the ways data and school improvement worked together in their schools.

Max described the importance of school data as a means of keeping the focus on school improvement. He commented that his Head of Curriculum used data to enable the leadership team to monitor school improvement in a variety of ways:

With our Head of Curriculum being as good as she is, she will provide me with regular updates in terms of individual students and where the data showed that they were, and where they are now in terms of raw data – so pre- and post-testing, that’s the very first way we keep track of that. Then there’ll be overall school data sets, so that’s our [system generated] ten-page profile, and that comes out once per semester and lets us know how we’re tracking, so they’re the two main things. We’ll also talk with teachers, we’ll be collecting other data from them to have a look at, and we’ll also be looking at our school report card data in terms of A-Es. So they’re the four main areas.

Here, the variety of data used to monitor school improvement is evident. Each semester or term, according to Max, there were opportunities for new data to drive conversations with teachers. Measurable or quantitative data also held a hallowed place in each of the case study schools as the key measure of improvement and as a way of determining the school’s strategic agenda. This aligns with Lingard and Sellar’s (2013) comments about the ‘naturalisation’ (p. 652) of data as the logical medium through which to consider teaching and learning.

Given that discourses of school improvement are embedded explicitly throughout policy documents and rhetoric from the system, each participant was asked to define school improvement at the outset of our interviews. Responses from Max, Judy, and Scott were very much in alignment. Each principal defined ‘school improvement’ specifically as being measurable by the School Performance Data Profile. The positioning of the data profile as the key measure of improvement can be theorised through Kickert’s (1995) notion of steering at a distance. Significant emphasis was placed on the data profile through its frequent re-releases with updated data each term, and through principals’ supervisors using it as the key measure of improvement and as a tool for supervision (Bloxham et al., 2015) to guide discussions regarding principal performance. The emphasis placed on the data profile—a tool of surveillance (Foucault, 2003) by which principals are monitored and judged—leads to the creation of norms regarding what is important in participants’ work. Norms, suggested Porter (1995), then govern the work of principals from a distance. This was evidenced in the case studies by the participants’ identification of the data profile as their measure of school improvement. Max, in particular, defined this with certainty:

Well we went through that period of defining what school improvement is, or not really knowing what that was. School improvement now, though, is very clearly defined in terms of what they call the ten-page data set – that’s your school’s data profile. And that is everything around attendance, tracking individual students for literacy and numeracy in particular. And things like NAPLAN sit there to be able to provide that sort of data. And school improvement, particularly about student improvement, talks about the actual effect size for individual students. So we’re very clear in our minds around that.

Comments from Scott and Judy reflected this definition as well. Judy discussed the importance of the data profile in relation to notions of school improvement, focusing on particular aspects of the profile:

For us probably as [Education Queensland] employees we’re driven by what they class as school improvement, and that’s around the accountability for improvement of results in things like NAPLAN, it’s driven around that, it really is, and politically and everything it’s all about that […] and that [school data] profile is gold.

Judy commented that ‘they’ (the Department) have classed school improvement as achievement in NAPLAN and on the data profile. This can be viewed with Lyotard’s (1984) notion in mind that in a performative culture, knowledge is a form of government—‘who decides what knowledge is, and who decides what needs to be decided?’ (p. 7). In this case, Judy highlighted that the Department’s decision of what knowledge would be valued then governed participants’ work as principals (Foucault, 1988, 2001, 2007). Scott’s comments reflected the emphasis on the profile as well, particularly when he spoke about the range of data that could be gathered from the profile to measure improvement.

On the other hand, Richard’s definition of school improvement, from his viewpoint as an Assistant Regional Director, was similar but not as closely aligned with the data profile as those from Max, Judy, and Scott. Richard commented, ‘wouldn’t it be good if my definition lined up with the principals’?’ His definition was not significantly different, but it took a bigger-picture view of school improvement than simply that which would be measurable by the data profile. Identifying school improvement as a bigger picture of ‘every child [improving] their life chances by reaching their potential at school’, Richard then elaborated:

broadening that out to a school […] level, that means the improvement is quantifiable in the data, in national testing data […] so you can take that to a school level and say that the data is showing the school has identified its gaps and has addressed those, and that the kids’ performance is improving as a cohort in those areas. Or you can take it to an individual student level and look at the relative gain and see what the movement is.

Richard saw the data profile as a ‘useful document’, but noted that ‘it’s more useful in bigger schools than in smaller schools; it loses its validity and becomes highly volatile with smaller schools. They can go from the penthouse to the outhouse in one year’ with one change of family or new enrolments in a small school changing the demographics significantly.

Reliability issues with NAPLAN should be considered when analysing much of the interview data relating to NAPLAN in this chapter. The gap between evidence from the research and the way NAPLAN was viewed by participants in this book—highly experienced educators—is significant. Richard expressed the view that using ‘national testing data’ could provide a measure of school improvement. In actuality, Wu (2016), a professor of statistics with expertise in large-scale testing and assessment, highlighted the large error margin in NAPLAN’s measure of student ability. She noted that while NAPLAN parent reporting documentation gives an impression that the measure of performance is precise (echoing previously discussed phrasing from Richard and Scott), the measurement error margin is large. In fact, the error margin is so large that it is not possible to ‘locate a student in a particular NAPLAN [reporting] band’ (Wu, 2016, p. 22), and NAPLAN tests therefore only provide a general idea of whether a student is ‘struggling, on track, or performing above average’ (p. 23). In addition, Wu (2009) found that, through fluctuations in test scores due to this imprecise measurement alone, a student could show anything from no growth at all to above-expected growth across two tests.
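Wu’s point about growth can be grounded in a standard result from measurement theory: a gain score is the difference between two imprecise measurements, so its error margin exceeds that of either single test. As a minimal sketch, assuming independent measurement errors across the two sittings,

\[
SE_{\text{gain}} = \sqrt{SE_{1}^{2} + SE_{2}^{2}},
\]

where \(SE_{1}\) and \(SE_{2}\) are the measurement standard errors of the two tests. With purely hypothetical single-test standard errors of 25 scale-score points each (an illustrative assumption, not NAPLAN’s published figures), the gain score’s standard error would be \(\sqrt{25^{2} + 25^{2}} \approx 35\) points; apparent ‘growth’ smaller than this margin is indistinguishable from measurement noise alone.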

Using student relative gain on the testing as a measure of school improvement thus becomes problematic, suggesting that even in larger schools this data set might be ‘volatile’, as Richard noted. If this was the discourse surrounding the effectiveness, or potential uses, of NAPLAN data, it makes sense that the data profile was seen as more valuable for Max, Judy, and Scott as principals of ‘bigger’ schools. The trust placed in these numbers (Porter, 1995) was a significant theme recurring throughout interviews with all participants.

Each of the principal participants expressed appreciation for the clarity they felt the profile had afforded: it compiled the metrics on which they were being measured and signalled what they were expected to attend to as school leaders. This raises questions about autonomy in the principalship in an environment where system rhetoric espouses principal autonomy as a key feature of the landscape (Gobby, 2013; Gray, Campbell-Evans, & Leggett, 2013). Systemic documents outline the requirements for principals to use ‘school performance data, contextual information, and the findings from the Teaching and Learning Audit’ to inform their School Plan, each school’s strategic agenda, developed every four years (QDET, 2015b, p. 1). This was reinforced by Richard, who described the place held by the profile in school planning processes. Participants were thus expected, via policy as text and policy as discourse, to draw their school focus from the data contained within the profile. It could therefore be argued that they were less able to exercise their professional judgement in determining the school’s strategic agenda. Indeed, each of the principals in these case studies observed that their school’s strategic agenda arose directly from the data profile.

In performative cultures, principals’ effectiveness is measured and judged according to externally-imposed targets and benchmarks, with the data profile acting as a physical manifestation of this practice. Principals who are seen to be achieving well, as measured by the data profile, are judged to be quality leaders and can be afforded more freedom and trust (Singh, 2014). Quality is measured in multiple ways in performative cultures, and the most common measure of quality for principals in these case studies was improvement in their school performance data, as measured within the profile. Not only was this confirmed by Richard’s and Tracy’s descriptions of the ways principals were monitored and deemed to be effective, it was also supported by a recent study (Bloxham et al., 2015), which found that the document is the ‘primary data set and point of reference employed by supervisors when monitoring Queensland public schools’ (p. 357). In a performative culture, where being seen as a quality or effective leader is of great importance (Keddie, 2013; Singh, 2014), improvement in the data measured by the profile thus becomes a key influence on the principalship.

Findings from the previous chapter highlighted the different ways participants responded to these practices of measurement and quantification of their work. Whereas Max and Scott could be seen as more ‘self-disciplined’ by the reforms, having philosophies of achievement and improvement that aligned with these discourses, Judy focused more on how perceptions of her as a quality leader enabled her to do the work she was most passionate about—a holistic focus on education at Merriwald. However, this perception of quality still arose from meeting performative requirements such as maintaining steady or improving school performance data on the Department’s surveillance tools (Foucault, 2003), such as the data profile.

This chapter has thus far established that the data profile played a significant role in helping participants to make decisions about school priorities and where to direct their focus. This is a logical response for principals who have been disciplined by discourses in a performative environment, as the very nature of performativity influences principals as subjects within the system. As Lyotard (1984) commented, no self is an island, and each of us exists in a complex network of relationships and interactions. Principals are shaped by discourses of performativity and explicit expectations, and have no choice but to respond in some way to the culture and the climate. Lyotard discussed the way people are displaced by the messages that traverse them. Each new message (or in this case, each new performative requirement or initiative) repositions the recipient within the shifting environment. What they can control, however, is how they respond.

For example, as previously discussed, in these case studies there was a very explicit expectation that principals would make use of this school performance data to guide their work; the extent to which principals did so positioned them as working with or against the system. This can be further theorised through Lyotard’s (1984) discussion of language games. When a performative ‘statement’ was made, such as the expectation to work with data or to use the data profile, principals were affected by the very existence of the expectation, and the environment in which they enacted their work was immediately altered. Comments from Tracy, who worked with principals across the region, indicated that the impact of the proliferation of data had been immense.

Tracy stated that the continued release of data—deemed important enough by the system to warrant four re-releases of the updated profile each year—hampered some principals’ abilities to engage in longer-term planning. Instead, Tracy observed that many of the principals she worked with were so focused on addressing the latest ‘thing’ (in her words) that they were working in reactive states, responding to each new piece of information and changing focus with each updated release of data (for more, see Heffernan, 2016). Lyotard (1984) discussed the notion of moves and counter moves—in this case, releases of the data profile, and principals’ changed behaviours in response. He explained that by necessity, moves require counter moves, but that ‘a counter move that is merely reactional is not a good move’ (p. 16). Tracy’s comments, right down to the same phrasing, were reflective of this. She noted that this reactionary response to system initiatives was happening with increasing frequency, and suggested that over time, many principals were losing the ability to take a strategic ‘helicopter’ view of leadership, highlighted as vital in implementing effective long-term change for improvement (Lewis & Andrews, 2009).

The data profile, the representation of the school’s performance according to system requirements, became such a major influence on principals’ behaviours that it resulted in what some may argue (based on Tracy’s observations) was a complete alteration of some principals’ capacity to lead in ways that could produce the long-term improvement the system was ostensibly seeking. While Max, Judy, and Scott did not specifically display these reactionary planning responses, it is possible that this was a phenomenon more commonly evident in less experienced principals (who comprised the majority of principals within the region). This would be a logical theorisation of the data, given Maguire, Braun and Ball’s (2015, p. 494) finding that early-career teachers exhibited ‘policy dependency’ and complied with policies more often than their experienced colleagues did. With that said, Max, Judy, and Scott did draw their focus directly from the profile, as discussed earlier.

The data profile was not the only data set that steered participants’ work from a distance (Kickert, 1995). Max, Judy, and Scott all conveyed (echoed by comments from Richard and Tracy) that the data profile formed part of a ‘bigger’ data picture, which included school-generated data. These additional data provided another means of measuring and surveilling the work being undertaken in their schools. This further illustrates the trust placed in numbers (Porter, 1995) and the enumeration (Hardy, 2015a) of schooling practices.

While all participants emphasised the importance of the school data profile, they focused on school-based data as well, which was reflective of Richard’s comments about the importance of data collected at the school level. Richard referred to ‘school generated’ data at a number of points, suggesting that this formed part of the basis for school improvement.

School-Generated Data to Augment the Data Profile

Supporting this notion of a bigger picture of data beyond those generated by the system, Max remarked that only some of the data profile was relevant for his school’s needs. According to Max, ‘there are only certain elements in there that I pay attention to’; elaborating, he explained that he used the profile to look at bigger-picture trends within his school data. For example, attendance was not seen as an issue for the school, so this was a data set that was generally dismissed within the profile. A theme emerged within the case study data that showed Max, Judy, and Scott making use of additional school-generated data to supplement the generic data profile. While each principal’s approach within this wider practice varied, all three shared key ideas about contextualising the use of data as relevant to their own school, as a supplement to the data provided by the system, and all emphasised the importance of school context when working in data-heavy climates. Further detailed discussion of the specific data collected by the schools is undertaken later in this chapter.

Principals’ Data Literacy and a Resulting Variety of Practices

There were some similarities in the approaches towards student achievement data in each school, such as the use of the same commercial testing products. Researchers are increasingly studying the commercialisation of education, with the edu-business industry estimated to be worth $48 billion per year in the United States alone (Hogan, 2016). Hogan analysed the partnerships between the state and private edu-businesses in the wake of the post-NAPLAN reforms studied in this project, and found that ACER holds partnerships with the vast majority of Australian education authorities. Hogan highlighted that ACER provides simplified solutions to policy problems that it has itself had a hand in identifying. An example of this can be seen in the head of ACER, Geoff Masters, producing a report recommending that ‘standard science tests be introduced in Years 4, 6, 8, and 10 for school use’ (Masters, 2009, p. 82) in monitoring student progress; ACER also produces and sells these same science tests. Concerns have been raised about the prevalence of commercial testing solutions, with Hogan (2016) suggesting that the current climate of reforms provides an environment where edu-businesses have influence over policy decisions, particularly in ways that displace experts in education policy. This is an important example in relation to these case studies, because the region’s mandated data collection (part of the Regional Charter of Expectations discussed in Chap. 4) included commercial products from edu-businesses such as ACER. The region also created a policy document specifying targets and benchmarks. Testing data from these commercial products were forwarded to regional staff at the end of each year for region-wide analysis and monitoring. This process thus served as another tool of surveillance (Foucault, 2003) for schools and principals.

This mandated adoption of commercial testing products was interesting to note in light of Max’s comments that regional data targets were not a significant consideration for him in his work as principal at Ironcliff. Their personal responses of resistance to, or compliance with, these discourses notwithstanding, the key similarity recurring among Max, Judy, and Scott was their emphasis on data and the frequency with which the notion of data arose in interviews. Each principal discussed being able to collect, analyse, and use data to draw conclusions about student learning. In addition, all participants discussed the ways they worked with staff in relation to data. Max described the emphasis placed on data at staff meetings, commenting that ‘we spend an inordinate amount of time in staff meetings looking at what the data is’. He indicated that the staff at Ironcliff would ‘drill down’ into data to find the stories or reasons behind what might be perceived as anomalies (or ‘blips’, as he called them). For example, upon receiving annual School Opinion Survey data, Max saw that staff morale was particularly low and asked himself, ‘is there anything here I need to drill down on?’. The result was apparently due to his ‘pushing’ a new style of in-depth, face-to-face parent reporting that teachers did not feel comfortable with. This was one example of how principal participants searched for the stories within the data to explain trends or unexpected changes, but also of how prevalent measurement was in schools in these case studies: it was not limited to student achievement, but also encompassed social climate and morale.

Lyotard’s (1984) comments about scientific knowledge (school data) not representing the ‘totality of knowledge’ (p. 7) are reflected in Max’s approach of finding the narrative within the data, a recurring theme with all case study participants. Similar comments were made by Judy and Scott in relation to analysing data at a school level and finding the narrative or ‘story’ behind the data. Judy discussed regular planning meetings and leadership team meetings where data were presented and discussed, and used to inform future directions and planning. Finally, Scott described a staff-wide focus on the deeper analysis of data at Mount Pleasant, where they tried to identify trends and areas for further focus, while seeking to understand the reasons for these trends.

In contrast, Tracy, whose role focused on working with data at a regional level and supporting principals to work with data at a school level, noted that a deep understanding of how to work with data was a major challenge facing the region’s principals. She expressed her concern that principals were not data literate enough to deeply analyse or interrogate data and make informed decisions that would result in significant improvement in the data valued by the system (evident in their inclusion in the school’s data profile). After analysing data at a regional level and surveying 300 teachers in the region, she captured trends relating to schools’ work with data:

Here are the trends: at a leadership perspective, they – after the event – they look at the growth and their relative gain. […] They look at the NAPLAN and Progressive Achievement Test [commercial literacy and numeracy testing used within the region] data at staff meetings and say ‘That’s interesting, okay’. And they move on. That’s it. At a leadership level for principals, they’re then looking at the big three or the dirty dozen, but they’re not unpacking anything else. The data is only informing intervention programs and they get a spike.

Tracy suggested that ‘the collecting [of data] is happening, the interrogation isn’t happening. They’re focused on “are we growing or not?”’. She commented that data literacy (Bruniges, 2012) was not something principals had been taught in depth; the system’s assumption that they possessed it was another influence on current pressures on the principalship, and one she believed was impeding long-lasting or significant improvement in schools. This was echoed by Klenowski (2016), who commented that leaders have had limited training in data analysis and interpretation. Data literacy was also raised as an issue requiring attention in recommendations from the Teacher Education Ministerial Advisory Group for incoming pre-service teachers (TEMAG, 2014). With that said, however, the principals in these case studies expressed confidence in their use of data, indicating a tension between Tracy’s assessment and either the principals’ perceived abilities or the actual requirements for principals’ work in this area. This raises questions about data literacy at a wider level, particularly alongside the discussion earlier in this chapter about the focus on NAPLAN as a measure of school improvement, given the evidence (Wu, 2016) that it is not a reliable measure of student achievement in the specific ways it was being positioned within regional discourses.

Max, Judy, and Scott all acknowledged the complex nature of analysing and using data effectively, with Max commenting that ‘you virtually need a Ph.D.’ to understand some of the data with which schools were being presented by the Department. However, for the purposes for which they were using data at a school level, all of the principal participants indicated they were confident in their knowledge and skills in the use of data to drive school improvement. This may be read as confidence that they could do what the system was asking of them, in terms of the data-driven supervision of their work by ARDs.

Data as a Tool for Surveillance and Supervision

A culture of performativity was perhaps most evident in the responses from participants about how data influenced their relationships with the system when it came to supervision and capability development. Richard discussed the use of school data profiles as a focus for discussions with principals and the way this determined the direction for working with principals. He described the influence of data on these decisions:

This week for instance, we got a release of this year’s school opinion survey data, so that’s an opportunity for us in our conversations with principals. That’s a fresh data set that opens up new discussion – what’s it saying, what are the gaps, what do you see in it, what do I see in it – and then in September we had the NAPLAN data released, so my conversations with principals are generally on a timetable of one school visit per term.

Richard went on to note that each visit involved a data conversation, depending on the school- or system-generated data that had been released or obtained since his last visit. A common theme arising from interviews with all participants was that the increased availability of different types of data over recent years had resulted in more precise supervision from ARDs. When I asked Richard if he believed he had a clearer picture of what was happening in schools than he might have had ten years earlier, due to the plethora of data now available for monitoring schools, he responded that ‘there’s a greater degree of precision now’. This was also reflected in comments from Max, Judy, and Scott, and is indicative of Porter’s (1995) notions of trusting in numbers and the detailed insights they can ostensibly give about the complex work undertaken in schools.

Max commented upon the shift in ways of working with supervisors that he had seen during his time in senior roles, remarking that the availability of data had increased and, as a result, conversations were more sharply focused on trends within the data than they might have been in the past. Previous chapters discussed the shift from school management to instructional leadership. One of the shifts identified by all participants was the change from keeping control of the school (where the region had focused on School Opinion Surveys as a gauge of effective leadership) to a more targeted focus on measurable student outcomes. Judy described this as a demand on principals to understand their data and be able to speak to their data profiles. This can be theorised through Foucault’s (2003) notions of surveillance, with the participants’ school data being one of the main ways principals were measured and judged. The balance of Lyotard’s (1984) notions of science (data) and narrative (being able to ‘speak to it’) was also evident. Judy explained:

You’ve really got to be able to voice it and be articulate about what you’re doing whereas before it was a general thing like, ‘We haven’t had any complaints about you and that’s great, and your parents are all good’, but now it’s not like that at all. It’s drilling into deeper things.

Interestingly, as I discussed with Tracy, this view of the shift to accountability-based environments discounts the types of data and accountabilities that existed pre-NAPLAN, such as Year 2 Net data (an annual collection of continua-based tracking for Year 2 students) and Queensland’s own Year 3, 5, and 7 testing data (similar tests to NAPLAN, but not generalisable across the nation). The fact that no participants aside from Tracy commented upon these as ways of quantifying or judging the work being undertaken by principals and schools in the past spoke further to the high-stakes nature of NAPLAN testing, and the level of influence it has had on the culture of the system and on how participants viewed their priorities within their conceptualisations of the principalship as currently constituted. This echoes Gorur’s (2016, p. 35) suggestion that previous ways of knowing a school had been surpassed by the ‘mine of information’ provided by NAPLAN and MySchool.

Scott’s discussion about the shift in supervisory practices as a result of the increased accountability surveillance (Lingard & Sellar, 2013) directly echoed Richard’s comments about his supervisory practices and also aligned with the notions of trusting in numbers, or numbers and data being able to give a clear picture of complex work (Porter, 1995). He reflected:

The biggest difference at the moment is [ARDs] come along and they don’t want to talk to you about your School Opinion Surveys – they talk to you now about a kid they identified in Year 5 who didn’t move as far as the other kids between year 3-5 in inferential questioning. So as they’ve got very - incredibly – precise in their agenda, that’s forced us to get precise. And there’s nothing wrong with that, that’s where we should be working. So I think they’re good moves.

Scott’s comments about precision were interesting to note given Wu’s (2016) findings about the error margins in NAPLAN’s measures of student achievement. This is reflective of findings from earlier in this chapter about the accuracy of discourses surrounding NAPLAN. It also reflects some practices and affordances (Thompson & Mockler, 2016) of the testing and data analysis processes within the region. Given that information regarding the reliability of NAPLAN results did not form part of the discourse, Scott’s comments about precision were logical. This comment from Scott also highlighted the haziness of the boundary between the performative and the educative, and how closely intertwined they may appear to have become for some principals. In his judgement that ‘they’re good moves’, educative logics (Hardy, 2015a) and Scott’s educative disposition may have seemed to be at play, but whether this was actually the case, given the conditions within which principals worked, is a moot point. Focusing on student outcomes and on specific students was seen as a positive thing, yet disciplining (Foucault, 1977) through performative influence is evident here, because that focus was framed within discussions with supervisors in which measurement and metrics were a key part.

Nevertheless, perhaps in part, a ‘logic of appropriation’ (Hardy, 2014) was evident. Scott’s claim that the performative shift towards data-driven leadership was a ‘good move’ because it resulted in a deeper, targeted focus on student learning is reflective of performative discourses being appropriated for educative purposes. This also reflects Thompson and Mockler’s (2016) notions of principals finding affordances within the climate of audit and testing.

The supervisory nature of working with data was an area where Max, Judy, and Scott felt confident in their approaches, because they had all been judged externally as being quality, or effective, principals. Max, Judy, and Scott therefore appeared to feel less pressure from external sources than Tracy described seeing in other principals within the region. Each noted that because their data were stable or trending upwards, they were left more to their own devices than a principal who was struggling or experiencing difficulties in leading measurable improvement in their school might be, aligning with findings from Singh (2014). This has been discussed in previous chapters but is worth noting again here in relation to data-specific supervision. A comment from Max exemplified those from all participants when he noted:

If your school performance is showing signs that it is trending upwards – long-term trending upwards – then it leaves very little room for anybody to start coming in and imposing their rules. If your data is trending downwards, whether it’s School Opinion Surveys or NAPLAN or anything else, then you really don’t have too much of a leg to stand on in terms of people coming in to say, ‘The Department wants you to do this or that’. But if you’re showing that you’re successful…

The trust in numbers (Porter, 1995) and the proliferation of data surveillance (Foucault, 2003; Lingard & Sellar, 2013) could provide supervisors with the confidence to judge principals’ work from a distance when the data reflected systemic targets in areas of focus. Richard confirmed this, when he described differentiated levels of support for principals based on their school’s data:

Schools that are flying need less supervision than schools that are struggling, but the supervision might be quite different. It might be more collegial, less frequent, less intrusive. At the other end of the scale, inexperienced principals, or principals who have been unable to bring around improvement, will naturally attract more support – more attention, more capability development, and more intervention if you like.

A comment from Scott that exemplified the culture of quantification of education was discussed earlier and also applies here in relation to how data guided the supervision of principals—‘when it comes down to it, [results are] all they want to see. They’re not interested in how I’m doing it, they just want to see this number of kids above this certain point, so it’s a very numerical system’. According to these performative approaches, if principals were meeting system benchmarks and targets, they were judged to be effective and afforded the freedom to continue working as they saw fit. Here, notions of steering from a distance (Kickert, 1995) and the reconstruction or re-framing of practices around ways of working desired by the system (Lingard & Sellar, 2013) were evident. Principals were left to ‘get on with it’, as Max said—provided that this resulted in desirable outcomes for the system as a whole. With NAPLAN forming the largest share of the data profile, it held a significant place in discussions about how data governed the work of schools.

The Place of NAPLAN in a Data-Heavy Landscape

Participants expressed a range of opinions pertaining to NAPLAN and its importance in an ever-expanding data landscape. As discussed in previous chapters, all participants (including Richard and Tracy) pinpointed NAPLAN as the catalyst for the changed landscape of accountabilities and school improvement in Queensland. However, differences were apparent in participants’ attitudes towards NAPLAN, as well as the emphasis placed upon it by each person.

Seemingly feeling the most pressure in relation to NAPLAN was Judy, who placed it at the forefront of the system’s definition of school improvement. Judy discussed the place it held in the landscape and the tension she felt between what the system expected and what her school tried to do in relation to addressing NAPLAN, commenting that the pressure sometimes arose from politicisation and media focus on the testing rather than from the system itself:

It’s not the be all and end all of everything, and everything doesn’t rotate around NAPLAN [in our school]. But we’re somewhat driven by that, and the Department does drive you by that, but sometimes they don’t even want to do that but it’s by political parties.

Judy noted the pressures she felt as principal in relation to the public interest in NAPLAN, primarily due to the media’s interest in the testing, as I have previously discussed. When asked how much pressure she felt about NAPLAN from external sources, she replied:

Oh yeah, a lot. A lot, because I mean the media – oh boy, they’ve released the results last week and so straight away they’re in the Courier Mail and you’re like ‘whoa’. A lot of pressure. But then you’ve got to convince your community to say we are doing really well in that and – overall we don’t, we’ve got some red there, we’ve got some orange which is great [laughs] – we aren’t all red […], we’ve got a couple of green […] but you know, we don’t have [green] overall. But when you tell parents […], in the end, lots of them are only interested in their child, and their growth.

This notion of parents being interested in their own children rather than the bigger picture of school data was an important one, and will be discussed in detail at the end of this chapter as part of a strong theme that emerged from interview data relating to participants focusing on individual ‘learning journeys’ for students. Judy’s focus on ‘convincing’ her community and providing the narrative to accompany the data can, again, be theorised through Lyotard’s (1984) notions of the types of knowledge (scientific or narrative) and how the two might co-exist.

Richard’s comments about NAPLAN echoed the seemingly hallowed place it held in the data landscape for schools, particularly given the political nature of the testing. In these comments, he recognised the place it did hold, and described the place he believed it should have held:

I think that there is, shall I say, an unholy emphasis on national testing data. But it’s the only national data that we have, and because it’s of great importance to government, therefore it comes down the line and is of importance to us Departmentally, and regionally, and at a school level. But it’s not the only important data – there’s a range of other data that schools collect, and it’s equally important or more important. So NAPLAN is lag data – it’s telling us what happened in the past. We’re encouraging schools to collect real-time data, that tells us what is happening now with kids, and respond to that with agility.

The ‘real-time data’ Richard referred to here was the short-cycle data collection Max was strongly against in Chap. 6, so there was still a tension between what the region expected and what principals would implement. In positioning NAPLAN as just one part of the regional data landscape, Richard demonstrated pragmatism about the testing. He went on to elaborate:

Michael Fullan talks about drivers. And he talks about the right or wrong drivers, but he also says the reality is that there’ll be some wrong drivers that are foisted upon us and we can’t deny that they’re there […] and we need to make the best use we can of them, and at the same time put our energy into the right drivers, as much as we can […] but the annual national testing is a reality and we deal with it and understand its place.

This attitude is reflective of literature reviewed in Chap. 2 surrounding discourses of accountability, autonomy, and leadership, which discussed the notion that successful principals understand and acknowledge the limitations of the system (in this case, NAPLAN being used as a driver for their work) and find ways to work around them (Adamowski, Bowles Therriault, & Cavanna, 2007). Richard, representing regional discourses, accepted that NAPLAN had been ‘foisted upon’ schools and encouraged its positioning as part of a bigger picture. This is reminiscent of Lyotard’s (1984) discussion of the power that can be found in language games. The message that NAPLAN is a ‘driver’ passed through Richard, but he positioned himself and his schools more powerfully when he responded by encouraging alternative ways of viewing the data.

Richard’s comments about NAPLAN being part of a wider data landscape were reflected in the work Max undertook as principal at Ironcliff. He described a range of data being collected at the school, with NAPLAN constituting just one aspect of this. Max was very matter-of-fact when asked if he felt external pressures pertaining to NAPLAN, responding simply ‘I don’t, no’. When asked to elaborate on his thoughts on this, he commented:

Without being flippant and dismissive, we know that there’s the NAPLAN focus and that’s something that causes angst every year, but we all know it’s one point in time of data that sits in our beaker, and it’s the beaker that matters, that we’re talking about at any given time with parents.

This ‘beaker’ approach is again demonstrative of earlier theorising regarding the balance of narrative and scientific knowledge (Lyotard, 1984). When Max referred to what the school ‘is talking about at any given time’, he was commenting on how the narrative of achievement was shaped by a range of ‘objective’ data. The notion of NAPLAN as part of a bigger picture of data was reinforced by all participants. Even when Judy felt more external pressures around NAPLAN, she addressed it as part of the larger picture of data within her school, commenting on the snapshot, single point in time nature of NAPLAN. This bigger picture approach for participants involved emphasising the context of their schools and how this context influenced performance on narrow measures of educational achievement, such as NAPLAN.

Participants’ Use of ICSEA to Frame School Data

All participants particularly emphasised the importance of ICSEA, the score of socio-educational advantage initially described in Chap. 4’s introduction of participants’ contexts. To contextualise these comments, it should be noted that principals were referring at some times to the colouring on the data profiles, and at other times to the MySchool website. They made reference to the comparisons both tools enabled against ‘all’ schools (rather than ‘like’ or similar schools), but they returned to their ICSEA score to contextualise their school data. I contend that, due to their ‘ownership’ of their school data, principals were more keenly aware of, and perhaps more vocal about, the potential impact of outside influences on results. Scott was the most emphatic regarding notions of fairness and equity in terms of the impact of each school’s ICSEA score on their data. He recalled the interactions he had with Richard and some other regional support staff to better understand the impact of ICSEA on NAPLAN data. He also described a formula that the region developed to provide a filter over NAPLAN data: it essentially offset the ICSEA score and would alter the overall picture of ‘reds’ and ‘greens’ when compared to the state or the nation. Participants commonly spoke in colours rather than in numbers or bands, speaking perhaps to the effects of the simplification of complex data in these discourses.

As a result of viewing Mount Pleasant’s data through this offset filter, Scott maintained that his school was performing well against non-‘like’ schools in light of their ICSEA score. He noted that schools with the highest ICSEA scores in the region were receiving similar NAPLAN results:

So my conversation with Richard is that these blokes [at the highest ICSEA rated school in the region] should be doing twice as good as us. Those numbers should be twice as good. So don’t come down to the likes of [schools with particularly low ICSEA scores] and say ‘you’re in the red’. We’ve got an ICSEA percentile of 20, theirs is 80, and they’re only better than us in one area. We’re better than them in one!

By ‘one area’, Scott was referring to one measure of NAPLAN, for example Year 3 reading. Scott’s frustration at what he perceived to be an inequity was evident, within a culture where surveillance of principals and teachers (Foucault, 2003) was at unprecedented levels (Lingard & Sellar, 2013). He elaborated further on the challenges faced by schools with particularly low ICSEA scores who were struggling to meet national means:

If you’re sitting at [remote school] on a scale score of 4, which means that 96% of schools in Australia are better off than you, it is impossible to get there. Impossible. And the demoralising thing there is – if you put that filter over for the disadvantaged schools, you must do it for the green leafy schools here. They have to have it added on. And it’s got to be relative. Just because you’ve got a cushy job at [very advantaged school in the region], turn up at 8:30, swan around, got good kids, have it easy, go home… they should be as accountable as we are for this stuff and they should be pushed hard.

The filter was also important for Scott’s work at his own school, Mount Pleasant:

I want all of my kids above the national mean score. BUT I want to be able to put another lens over it to say, ‘Okay, let’s put the ICSEA thing on top and just take a moment to realise that if we take off [the formula], where does that put us?’ It actually puts us where the colours should be. So we had nothing to do with the formula itself, we were just pushing [to the region] that the ICSEA score has to be considered [when comparing non-‘like’ schools]. […] If the formula was applied, we’d look excellent […] and we want to be able to say to parents that sort of stuff too – there are elements that look bad, but let’s look at it from a different point of view and see, some of this stuff is working. If we had all kids coming in fully supported, access to medical, specialists, high literacy levels, we could be there.

This seemed to be presented as a more nuanced way of looking at their data and taking individual contexts into account. Judy and Max agreed about the challenges of the local context and ICSEA score (and its associated implications) influencing NAPLAN results—which, it is important to remember in terms of performative cultures, were the publicly published and reported-upon measures of ‘quality’ by which schools were most commonly judged, as ‘perpetually assessable subjects’ (Niesche, 2015, p. 138).

Judy expressed her belief, as discussed earlier in this chapter, that Merriwald’s ICSEA score had implications for their potential performance on NAPLAN, commenting that ‘we don’t get lots of green [when compared to national averages] and we probably never will because of our ICSEA and because of our demographics’. She also explained that when tracking individual distance travelled for students and disregarding the notion of ‘greens and reds’, her school did a better job of improvement in NAPLAN for students than many of the ‘leafy green schools who do actually get lots of green’. This particular discussion focused on comparison with all schools through the school data profiles as well as MySchool, not just with statistically similar schools on MySchool.

Similarly, Max commented that you ‘build all of that’ (referring to the impact of ICSEA upon NAPLAN data) into discussions about school improvement as well. He did note, however, that ICSEA data simply confirmed what he already knew about the school’s changing demographics and data:

Well the ICSEA data just sort of fits in with looking at it and saying, ‘Okay, uh huh, that’s about where we’re at’ – I don’t need ICSEA data to tell me we don’t have the Mercedes and BMWs dropping kids off at school anymore, I can see that for myself. We had police out the front with the speed gun booking people left right and centre but also lots of unroadworthy or unregistered cars – that would never have happened in the past. So that’s a reality check – do I need ICSEA to tell me that? No.

Max did not think that regional staff such as ARDs and support officers were particularly interested in the impact of ICSEA scores on their data, because they were more focused on trends than on aspects such as the ‘reds and greens’. I would suggest that this is possibly because they had already applied the aforementioned filter to these data, and focused instead on the aspects that Scott and Judy mentioned were more easily tracked, such as long-term trends and individual growth.

This ongoing discussion about ICSEA is another example of the trust placed in numbers in this climate (Porter, 1995). The ICSEA score was embraced by participants and served as an ‘objective’ way of measuring their school against others, an exercise of incredible complexity in theory. This example also serves as another demonstration of the importance of scientific knowledge (Lyotard, 1984). In this case, the ICSEA score served almost as a narrative in itself, contextualising the school in the wider performative landscape. Principals expressed frustration at being compared to schools with different ICSEA scores, and they referred to this more than to the ‘like’ schools for which ICSEA was designed. This may be because performative cultures can encourage comparison, and encourage schools to aim ‘to be better than others’ (Keddie, 2013).

Perhaps as a result of participants’ beliefs about the inequities of comparing schools, all three shared an approach that emerged as a major theme through the analysis of interview data: tracking individual student ‘learning journeys’ to identify success.

Common Approaches to Student Data: Narratives of Individual Student Journeys

Each principal emphasised an approach of narrowing their data focus down to individual students. Max described a ‘big picture’ view of student data at Ironcliff through the school’s ‘beaker model’ approach. In this approach, each student had an individual data profile which tracked a variety of data, including NAPLAN, regionally-mandated commercial testing data (such as PAT-R and PAT-M testingFootnote 3), as well as ‘a whole range of school-based data [more commercial products such as CARS&STARS, and Brigance testing of early years students] and all of that stuff goes into the mighty beaker’. Each student received a beaker of their own, ‘and we simply say “what do we need to do for that kid to get them from there to there?” and we can demonstrate that’. The beaker was emphasised as providing a bigger picture of student learning, with Max commenting that ‘it’s the beaker that matters, that we’re talking about with our parents at any given time’.

One of Scott’s reasons for tracking individual growth was pragmatic. As Mount Pleasant’s data improved, so did the data of the state and nation, which Scott suggested made it difficult to measure improvement clearly, shifting the goalposts in terms of benchmarks and targets. He described these goalposts as moving because ‘every time our average goes up, so do all the averages […] so it’s really difficult to judge yourself because if all of Queensland rises, the average goes up too’. As a result, his school moved to tracking individual students:

I’m hoping to be able to track individual improvement [on data beyond NAPLAN] and then look at school improvement. […] I’m more interested in micro-tracking every kid, monitoring, tracking, monitoring, tracking, planning, and then over six-month intervals stand back and look at those averages and say, ‘Okay, our average was there and now it’s here’ and compare it to just ourselves.

Highlighting the importance of the narrative to support the science (Lyotard, 1984), part of this approach involved conversations with parents. Scott believed that by tracking individual students he could tell parents a more positive story about their child’s education and provide them with a clearer picture of their child’s learning journey:

The most important thing for our kids is that you’ve got those high expectations and know where they are. If you’re talking to mum and dad, it’s about progression – you need to know where they are and where you want them to go. If you can tell them that story about progression, you’re golden. If you’re not at benchmarks and you don’t even know where these kids are or what they’ve done, you’re stuffed.

Judy had similar reasons for tracking individual students, which also took conversations with parents and carers into account. At the same time, these reasons encompassed her own conceptualisation of her role as focusing on holistic education for individual students and valuing their individual ‘learning journeys’. Judy regularly used the term ‘journey’ when talking about students’ education, providing insights into her beliefs about the importance of long-term pastoral care of students across their school career. Also illustrative of the power of the narrative in coexisting with the scientific (Lyotard, 1984), she noted that when discussing students’ journeys with parents, it was ‘very important’ to have a measure of distance travelled because ‘you can hang on to that and say “here’s what sits behind the picture for our school, and we are doing really well individually. […] Those kids have moved, and they have actually shown improvement”’. She commented that when working with parents, ‘really, in the end, lots of them are only interested in their child and their growth’. As a result, the school’s approach was to focus on individual student ‘learning journeys’:

We give them lots of communication [about data and student ‘learning journeys’] and I think that reflects in our School Opinion Surveys because when you go back and look at that, those parts we get 100% for repeatedly, all the time, for staff, parents, and students is all about ‘Do we give a good education at this school?’ and yes, we do. And there’s 100% of people believing in you there, so they will go on that journey with you. And they’re not questioning the overall picture too much, they question the individual thing, and they see that.

The quantification of work in schools was evident here: even as Judy spoke about the importance of narrative, that narrative rested on numbers. Numbers in the form of school surveys provided a measurable, ‘objective’ (Porter, 1995) picture of something as complex as community satisfaction with a school.

Parallels could be seen in the approaches Max, Judy, and Scott adopted to address discourses of data as a ‘given’ within the performative system. They made use of narrative knowledge (Lyotard, 1984) and contextualised the data within their schools. Rather than pushing back against the quantification and measurement rife in the education landscape, they collected additional data to support this narrative, and even used data to judge their own success in the endeavour.

Conclusion

Gillies (2013) raised the question of what it takes for a principal to be valorised within the current discourses of educational leadership, management, and administration. This chapter goes some way towards answering this question. The most direct way for principals to be valorised in this particular case study climate was for their school to perform well on system-defined achievement metrics. Participants wanted to improve educational outcomes for their students, but the policy conditions within which they were working, and the discourses shaping educational leadership, influenced the ways they focused on these outcomes and the key areas that were measured and targeted.

The chapter highlighted how these educative goals have become inextricably linked with measurement and data. Participants wanted to succeed, to be seen as quality or effective leaders (valorised, in this sense), and to be given more freedoms as a result. Success had essentially been reduced to performing well on the variety of testing, assessment, and diagnostic tools at their disposal, as compiled, in particular, within the School Performance Data Profile. The development of this profile, and the ways in which it was used, have changed the nature of the principalship. When principals pointed to the data profile as their key measure of school improvement (itself one of the fundamental requirements of the principalship in Queensland), the impact of the profile on principals’ leadership practices, and its implications for long-term school planning and leadership, became more evident. If the data profile continues to drive principals’ leadership practices and school agendas so closely, schools may well become stuck in an infinite loop of changing focus with each release of the profile. This reduces the opportunities for strategic long-term planning that meets the deeper needs of students. Quick fixes, by their nature, tend to be superficial and address surface needs at best. They are prevalent in these types of conditions, as suggested by Keddie and Lingard (2015) and reinforced by Tracy’s comments throughout this chapter.

As established early in this book and reinforced throughout this chapter, discourses of quality surrounded the principalship. Quality was measured through the data profile, and the system responded directly to improvement in the profile through its approaches to supervision and development. This placed the profile itself at the height of importance in the data landscape for principals, although findings within this chapter did indicate that principals spoke about a ‘bigger picture’ of data. While principals were at times somewhat dismissive of NAPLAN, they returned to the emphasis placed on it by the system and the community, and to the affordances this type of data offered them. The hallowed place of NAPLAN data in the profile also explicitly signalled the importance of NAPLAN data in the case study principals’ schools and the wider system.

Tracy raised concerns about principals’ data literacy, suggesting that in a ‘data driven world’ the focus tends to be on reacting rather than being proactive. There thus appeared to be some work to do in better understanding the ways principals perceived responses to data from supervisors and system policies (both policies as text and policies as discourse). In relation to the productive aspects of data in schools today, participants expressed appreciation of the clarity and precision data afforded them as school leaders. This echoes research by Thompson and Mockler (2016) on the affordances offered by data, as well as the notion that trust placed in numbers gives weight to decisions, or guides them, when the decision-maker may not be as powerful as they are perceived to be (Porter, 1995). Numbers thereby lend gravitas and meaning to leadership decisions and make them easier to justify. The intricate entanglement between principals’ embrace of data and its affordances, on the one hand, and the notion of steering at a distance, on the other, will require further unpacking in future research. The impact for participants’ leadership practices, however, is the same: data holds an esteemed place in the policy and discourse landscape.

Part of the discourse relating to data in the case studies, as explored within this chapter, was that principals owned their data and were considered responsible for their school data. As a result, the external influences on data became more pronounced for these school leaders. Each of the principals in this book spoke emphatically about the impact of ICSEA and what it meant for their school data. Some participants went on to comment that it was unfair to judge their schools, or schools more disadvantaged than their own, against the scores of schools with students from advantaged areas and few diverse learning needs. One response for principals in this study was to focus more on individual progression for students rather than on the oversimplified ‘data for dummies’ coloured banding presented on MySchool and in the data profile. Such banding is especially problematic when considering the measurement and reliability errors inherent in NAPLAN.

Indeed, Tracy suggested that these approaches of focusing on individual student learning may have been due to a lack of data literacy, and to principals perceiving it as easier to track distance travelled for individual students than to examine more complex data sets across cohorts and years; Wu (2016) also cautioned against this practice. However, I would suggest a different explanation for this approach. If principals owned their school’s data, as indicated by Richard, and the influence of ICSEA on the data valued by the system was so significant, then measuring distance travelled for each student was an effective means of seeing success, not only for students but for principals themselves as educators, providing ‘moments of quality’, as theorised by Ball (2003) and elaborated upon by Keddie (2013).

Here, performative notions of ‘quality’ can be seen in principals demonstrating growth in measurable student outcomes. The influence of data as a construct shaping participants’ work is undeniable when examining the interview data. Focusing on individual students enabled the principals to take control of the narrative through positive learning stories, and it spoke to a bigger picture than that provided through systemic data such as NAPLAN. This was a way of ensuring they enacted their own conceptualisations of the principalship. Scott was able to guide his work with teachers and direct his energies where he saw fit, as he led from behind. Judy was able to take a holistic view of students, and Max was able to quantify and measure student outcomes effectively, ensuring that he enacted his vision of improving measurable student outcomes.

Within this chapter, the reconstruction of educational practices around system priorities (Lingard & Sellar, 2013) was evident in the ways data, targets, benchmarks, and accountabilities influenced the supervision and capability development of principals; school agendas; and the work being undertaken by Max, Judy, and Scott as they enacted their individual conceptualisations of the principalship within current policy ensembles and the landscape of leadership discourses.