Background

There is an abundance of data being generated about students and schools, and supplied to principals, teachers, and parents. In recent years, Australia’s most notable example arises from the nationwide testing conducted in the National Assessment Program-Literacy and Numeracy (NAPLAN) (Ministerial Council for Education, Early Childhood Development and Youth Affairs (MCEECDYA) 2009). Such data have the potential to be useful for informing schools, teachers, and parents about school and student performance. The government’s claim is that:

Literacy and numeracy assessments provide rich data about individual student performance and assist teachers to plan learning activities for students. They also enable schools to develop a more objective view about the performance of their students compared to those in other schools and in relation to state-wide standards. (Ministerial Council on Education, Employment, Training and Youth Affairs, n.d., p. 1)

The collection of data through high-stakes regional or national testing, especially in the areas of literacy and numeracy, has become an established but sometimes controversial practice in many countries, notably the United States, where it has been a key platform of the ‘No Child Left Behind’ policy. Teachers and researchers have questioned the validity of the test items and testing procedures and, further, the impact that the publication of these test results has on teaching and learning (see, for example, Jones and Egley 2007; Nichols and Berliner 2008). It is not the purpose of this paper to examine either test validity or the impact of external tests on classroom practice, but rather to consider whether teachers are able and willing to engage with the data from national testing.

Matthews et al. (2007), writing from their experience in Georgia, United States, emphasise that ‘to use data, teachers must accept the data, know what the numbers indicate, and be ready to change their instruction’. An Organisation for Economic Co-operation and Development (2004) report on the improvement of education in Chile discussed the introduction of national testing in areas like literacy and numeracy. There, it was noted that 40% of teachers surveyed thought that the reports they received were either not very important or not important at all. One of the issues identified was that constructive use seemed to be restricted by teachers’ lack of capacity to interpret the reported data: the more sophisticated the methodologies used by those reporting the data, the less understandable the reports were for teachers.

In Australia, the supply to schools of data reports based on students’ results in national tests appears to be built on an assumption that those who receive such reports have the capacity—in terms of knowledge about statistical measures, terms, and types of representations—to interpret them effectively. There is some evidence that the reality may be different. Louden and Wildy (2001), in their early report on the work of the Western Australian Data Club (established by the Department of Education to support school leaders in making performance judgements based on their school’s Western Australian Literacy and Numeracy Assessment (WALNA) data), note that the key reason principals were not making use of the WALNA data was ‘because they did not know exactly what the data meant’ (p. 7). Principals also commented on the need for teachers to gain the skills and understanding necessary to extract pertinent information from such data.

There are, however, additional factors that may affect the extent of teachers’ and principals’ use of data. Whereas teachers’ lack of knowledge and understanding can clearly have a limiting influence, their attitudes, beliefs, and perceptions also impact on the degree to which they attend to statistical information. First, teachers may have reservations about the value and validity of externally mandated testing, and this may influence their level of engagement with the resulting data. Second, negativity towards statistics is well entrenched in the community. Wallman (1993) has noted a common series of ‘mis-es’ in relation to statistics: misunderstanding, misperception, mistrust, and misgivings. The Statistical Society of Australia (2005) pointed out that statistics has a poor image and profile in Australia among students, parents, and the general public. Negativity towards statistical information, and lack of confidence in analysing statistical data, may discourage education personnel from anything more than cursory interaction with such information. Previous research (e.g., Gal et al. 1997; Pierce 1989, 1995) has shown that mathematics anxiety, for example, can inhibit both the learning and use of statistics. Any study of statistical literacy for the workplace must go beyond consideration of knowledge and skills to identify such barriers. Engagement with quantitative system data, and adoption of its use as a basis for decision-making and planning, are unlikely to occur unless teachers perceive the use of statistics to be valuable and are confident that they have the necessary skills to use them.

Adopting new practices—in this case, using quantitative data as a basis for decision-making—involves a change in behaviour for teachers. One of the theoretical models for examining behaviour change is the Theory of Planned Behaviour (TPB) (Ajzen 1991). This model suggests that people are unlikely to change unless they have a strong intention to change. Francis et al. (2004, p. 7), elaborating on TPB, explain that predicting whether a person intends to do something requires knowledge of:

  • whether the person is in favour of doing it (attitude) [e.g., ‘I can see the benefits of data-driven decision-making so I want to learn more about it’];

  • how much the person feels social pressure to do it (subjective norm) [e.g., ‘No one else is bothering to use these data, so why should I?’]; and

  • whether the person feels in control of the action in question (perceived behavioural control) [e.g., ‘I don’t know enough statistics, so I can’t use the data for decision-making’]. (Emphasis in the original text.)

If the ‘score’ on these three predictors can be improved, it should increase the chance that the person will intend to perform the desired action and thus increase the likelihood of the person actually doing it. In studies across the health, social, and behavioural sciences (see, for example, Armitage and Conner 2001), attitudes, subjective norms, and perceived behavioural control have consistently been shown to be strong predictors of behavioural intention. The issues associated with behaviour change regarding data use for principals and teachers can be seen to parallel those explored in other TPB studies related to, for example, the use of technology (Pierce and Ball 2009). The present pilot study employs the framework of TPB to identify principals’ and teachers’ negative perceptions (barriers) and positive perceptions (enablers), both of engaging with quantitative data and of adopting data-driven decision-making.
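To make the predictive structure of TPB concrete, the following sketch fits an ordinary least squares model of behavioural intention on the three predictors. All of the data, coefficients, and the sample size are invented for illustration; the sketch does not reproduce any analysis reported in this paper or in Armitage and Conner (2001).

```python
# Illustrative sketch only: behavioural intention modelled as a linear
# combination of the three TPB predictors, using invented scores.
import numpy as np

rng = np.random.default_rng(0)
n = 84  # chosen to match the size of the pilot sample, for illustration

# Hypothetical construct scores on a 1-5 scale (e.g., means of Likert items).
attitude = rng.uniform(1, 5, n)
subjective_norm = rng.uniform(1, 5, n)
perceived_control = rng.uniform(1, 5, n)

# Assume intention is a weighted sum of the predictors plus noise.
intention = (0.5 * attitude + 0.2 * subjective_norm
             + 0.3 * perceived_control + rng.normal(0, 0.5, n))

# Ordinary least squares: how strongly does each predictor relate to intention?
X = np.column_stack([np.ones(n), attitude, subjective_norm, perceived_control])
coef, *_ = np.linalg.lstsq(X, intention, rcond=None)
print(dict(zip(["intercept", "attitude", "subjective_norm",
                "perceived_control"], coef.round(2))))
```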

There are a number of frameworks that have been used, particularly in health, for investigating and encouraging behavioural change. The model developed by Prochaska and DiClemente (1983), for example, guides the description of stages of behaviour change and focuses on the act of changing itself. Such models focus on the processes involved and on establishing sustained patterns of behaviour. The TPB, in contrast, focuses on the affective factors that impact on intentions to change, and theorises that these intentions then impact on the extent of behavioural change. It takes into account internal attitudes, as well as responses to external influences such as social pressures and perceived capacity to change. It was chosen for this study because it provides a framework for focusing on such attitudes and perceptions and should permit researchers to identify enabling factors or barriers to teachers’ intentions to change their use of system data, such as that arising from NAPLAN testing. This, in turn, might inform the development of programs that address these factors, for example by providing professional development that enhances teachers’ self-confidence in using statistics, thus increasing the likelihood of change actually taking place.

The present study

The participants

The present study was a pilot, examining the affective factors influencing the use of externally supplied school and class assessment data and, to a lesser extent, its actual use. English and Mathematics teachers were targeted because the national assessment program currently focuses on literacy and numeracy. Data were collected from 84 teachers, as follows. Forty-nine secondary school Mathematics teachers, from 16 schools involved in a Years 7–10 mathematics teachers’ professional development program, volunteered to complete a pen-and-paper survey and gave permission for their data to be used for research purposes. (Ethics approval for such data collection had previously been granted by the Melbourne Graduate School of Education’s Human Research Ethics Committee.) Years 7–10 English teachers at the same 16 schools were invited, via the schools’ principals, to complete the same survey anonymously online; 35 English teachers provided data via this method. The pilot study’s purpose was to trial items, obtain some preliminary findings, and determine critical themes for a future large-scale study.

The survey instrument

The survey consisted of eight background items on the use of national assessment data in the teachers’ schools, as shown in Table 1. This background section probed the schools’ and teachers’ access to and use of Achievement Improvement Monitor (AIM) data or National Assessment Program–Literacy and Numeracy (NAPLAN) data. Both sets of data were referred to, since both assessed literacy and numeracy during the 2008–9 timeframe of the study. The change from the Victorian state-wide AIM tests (Victorian Curriculum and Assessment Authority 2009) to the nationally-based NAPLAN tests (MCEECDYA 2009) had taken place in the previous year so, whereas the data reports were similar, teachers had only had experience of one set of NAPLAN reports. The data from the eight background items were linked, where possible, to the teachers’ demographic details (gender and years of teaching), supplied by the Mathematics teachers on an earlier, unrelated survey (using an anonymising coded identifier) and by the English teachers in items included in the online survey.

Table 1 Survey items targeting school background and teachers’ use of AIM/NAPLAN data

Following the background items there were 30 Likert-scale items, consisting of statements for which the teachers were asked to indicate their level of agreement on a 5-point scale from Strongly Disagree to Strongly Agree. The design of the items was guided by the principles of TPB, as explained in Francis et al. (2004). Items targeted attitudes (13 items; see Table 2), subjective norms (4 items; see Table 4), and perceived behavioural controls (13 items; see Table 5). Items were based on the health-related examples given in Francis et al., adapted to suit the educational setting. In this case, the behaviour of interest was ‘use of NAPLAN data’, and so questions were designed to investigate attitudes to this use, and subjective norms and perceived behavioural controls judged likely to be relevant to teachers. Because this was a pilot study, the TPB investigation was limited to this 30-item survey, and no interview component was included, although the background items in Table 1 did permit open responses addressing factors affecting data use. Note that Francis et al. suggest that as few as 12 items on a survey—some targeting each of the three constructs—can be adequate for preliminary or limited analysis to predict variations in behavioural intention. In the survey as presented to the teachers, the three different categories of items were interspersed, but for the purpose of presenting the results they have been grouped by category into Tables 2, 4, and 5, and renumbered. For simplicity, in the Likert items ‘AIM and NAPLAN reports and data’ were referred to simply as ‘NAPLAN reports/data’. This convention will be followed throughout this paper.
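As an illustration of how such interspersed Likert items can be regrouped by TPB construct for reporting, the following sketch tallies percentage agreement by category. The item codes echo those used in Tables 2, 4, and 5, but the mapping shown and all response values are invented for illustration.

```python
# Hypothetical sketch: regroup interspersed Likert items by TPB construct
# and compute percentage agreement per item. Item codes and data are invented.
from collections import defaultdict

# Map each survey item to its TPB construct, as done for Tables 2, 4, and 5.
construct = {"A3": "attitude", "A5": "attitude",
             "SN1": "subjective_norm", "SN4": "subjective_norm",
             "BC2": "control", "BC7": "control"}

# Responses coded 1 (Strongly Disagree) to 5 (Strongly Agree); None = missing.
responses = {"A3": [5, 4, 4, 2, None], "A5": [2, 3, 4, 1, 2],
             "SN1": [2, 2, 3, 1, 2], "SN4": [4, 3, 2, 4, 3],
             "BC2": [2, 4, 3, 2, 1], "BC7": [3, 4, 4, 2, 3]}

agree = defaultdict(list)
for item, values in responses.items():
    valid = [v for v in values if v is not None]  # percentages use valid n
    pct_agree = round(100 * sum(v >= 4 for v in valid) / len(valid))
    agree[construct[item]].append((item, pct_agree))

for category, items in agree.items():
    print(category, items)
```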

Data analysis

Data were tabulated as simple frequency/percentage counts. Percentage values were based on valid responses, and the number of valid responses, n, for each item is included in the tables. Percentages were rounded to the nearest whole percentage point; due to this rounding, the percentages may not total 100%. In some of the reporting it was useful to combine the frequency percentages for Strongly Disagree and Disagree responses, and likewise for Agree and Strongly Agree; in such cases, ‘disagree’ and ‘agree’ do not have an initial capital.
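A minimal sketch of this tabulation convention is given below, using hypothetical counts for a single Likert item; percentages are computed over valid responses only and rounded, so they may not total exactly 100%.

```python
# Sketch of the tabulation convention described above, with invented counts.
categories = ["Strongly Disagree", "Disagree", "Neutral",
              "Agree", "Strongly Agree"]
counts = [6, 15, 22, 28, 9]   # hypothetical frequencies; missing excluded
n_valid = sum(counts)

# Rounded valid percentages; note the sum may drift from 100 (here, 101).
pcts = [round(100 * c / n_valid) for c in counts]
print(f"n = {n_valid}:", dict(zip(categories, pcts)), "sum =", sum(pcts))

# Combined lower-case 'disagree' and 'agree' figures, computed from the raw
# counts rather than from the already-rounded percentages.
print("disagree:", round(100 * sum(counts[:2]) / n_valid), "%")
print("agree:", round(100 * sum(counts[3:]) / n_valid), "%")
```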

Statistical associations between variables were investigated using Fisher’s Exact test. Although this test has relatively low power, it was used in all cases for consistency, since in some cases the conditions for the conventional Chi-squared test of independence were not met. Although a significance level of 0.05 was used to identify results as significant, exact p-values have been reported. Finally, the qualitative data arising from responses to open-ended questions were used in only a limited way, to exemplify themes arising in the quantitative data. Where individual responses have been given, teachers are identified by E or M for English or Mathematics teachers respectively, together with a numerical identifier.
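The following sketch illustrates Fisher’s Exact test in Python using SciPy, with hypothetical counts. Note that scipy.stats.fisher_exact handles only 2×2 tables (for example, after collapsing Likert responses to agree/disagree); exact tests on the larger tables analysed in this study require other software, such as R’s fisher.test.

```python
# Hedged sketch: Fisher's Exact test on a hypothetical 2x2 contingency table.
from scipy.stats import fisher_exact

# Rows: agreed / disagreed with one item; columns: agreed / disagreed with
# another. These counts are invented and do not come from the study's data.
table = [[18, 8],
         [3, 22]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, exact p = {p_value:.4f}")
```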

Results and discussion

Access to the AIM/NAPLAN data

The initial background items from the survey gave some indication of current practices related to the accessibility and use of NAPLAN data, and allowed teachers to comment about these. Questions 3, 4, 5, and 7 from Table 1 are examined here. Sixteen of the teachers said that their school did not provide them with any information or reports, although it must be noted that there was a poor response rate for this question from the English teachers, with only 11 of 35 responding, compared to 46 of 49 Mathematics teachers.

Only 44 of the 84 teachers said that they actually had access to data for their classes (29 of 35 English and 47 of 49 Mathematics teachers responded to this question). Fifteen teachers said that, despite the data’s availability, they did not choose to access them; four of these teachers’ comments mentioned lack of time. Other comments claimed, for example, that because NAPLAN was a ‘general test’ the teacher was not able to help students, being unable to see in the data exactly what students did wrong. Another participant expressed the belief that teachers would be told (presumably by more senior staff) about important implications and required actions arising from the results. One further detailed response, from an English teacher (featured below), highlights the barriers that, for this teacher, meant that he/she did not intend to use the NAPLAN data to inform decisions regarding teaching. This teacher expressed a negative perception of the value of the data (attitude) and perceived several behavioural controls: a claimed lack of understanding, lack of knowledge of how to find the data, and lack of time to find or study the data.

Whilst I find it useful to see if my own view of a student matches up with their test score, I am not prepared to spend the time on something I don’t understand or perceive as ‘good’, when I’m already flat out trying to be creative within the already imposed restrictions.

I am not sure where [the data] is or how I would use it. I assume it may help with planning but in a day to day sense I don’t have time to request/find it! (E1)

Of the 32 teachers who said they did access the data, several commented on the ways in which the data were used, with 22 saying that they used the data to determine students’ levels of understanding, weaknesses, and strengths. Two teachers specifically commented that difficulties in particular learning areas could be identified from the data.

In general, the responses suggest that teachers have limited direct access to the data (only 52% of the participants reported such access); the nature of this access needs to be clarified in future research. However, even where access was available, a substantial proportion of teachers (27% of those with direct access) were choosing not to avail themselves of it, with time featuring as a factor perceived to be restricting engagement with the data (see also the section below on perceived behavioural controls).

Factors affecting teachers’ intentions to engage with AIM/NAPLAN data

The Theory of Planned Behaviour was used as a basis for examining factors that may be potential barriers to or enablers for teachers’ intention to engage with and make use of NAPLAN data to inform their teaching practice. For each factor, the data for the whole group are presented, and then the influences of access to data, teacher type (Mathematics or English), and gender are analysed. It is noted here that an analysis of each of the 30 items probing teachers’ views against ‘years of teaching’ did not reveal any statistically significant associations in this data set.

Attitudes

The results presented in Table 2 indicate that most of these teachers hold positive or at least neutral attitudes towards NAPLAN data. The strongest positive responses were to items A3 and A8, with over 77% of the teachers agreeing that ‘NAPLAN data is useful for identifying weak students’ and ‘I think that it is important that I have access to the NAPLAN data from my own students’.

The highest negative responses (and also the lowest positive responses) were for items A5, ‘NAPLAN data is useful for identifying students’ misconceptions’ (with 25% of respondents expressing disagreement), and A12, where 30% agreed with the negative statement ‘NAPLAN data doesn’t tell me anything that I don’t already know about my students’. It should be noted that whereas very few teachers (9%) felt that NAPLAN data do not reflect their students’ abilities (A13), fewer than half of them agreed that the data do reflect students’ capabilities (see also A6).

Table 2 Percentage response to survey items targeting attitude

Turning to possible associations with other factors, there was a statistically significant association between several attitude items and teachers’ responses to the perceived behavioural control item BC2: ‘I have access to NAPLAN data in a form that allows me to get the results and analyses that I require’. The majority of those teachers who agreed with this item also agreed with items A3 (92%) and A5 (69%), which concern NAPLAN reports’ usefulness in identifying weak students and students’ misconceptions respectively. In contrast, as seen in Table 3, for those who disagreed with BC2 the agreement percentages were 80% and 12% for A3 and A5 respectively, suggesting that teachers who did not feel that NAPLAN reports were in a useful form still thought these reports could help identify weak students, but not that they were useful for identifying students’ misconceptions. The statistical associations were confirmed by Fisher’s Exact (F.E.) tests: A3, F.E. = 9.077, p = 0.031; and A5, F.E. = 21.723, p < 0.001. There was also an association between BC2 and A4 (‘NAPLAN data is useful for identifying topics needing attention’), with F.E. = 13.393, p = 0.005. Together these results suggest, perhaps not surprisingly, that there may be a link between teachers’ perceptions of the usability of supplied data and what they believe it can tell them.

Table 3 Percentage of teachers responding to key attitude items A3 and A5 by perceived access to the data
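A cross-tabulation in the style of Table 3 can be sketched as follows, using pandas with invented response data; the percentage of each response to an attitude item is computed within each BC2 response group.

```python
# Illustrative row-percentage cross-tabulation; the responses are invented
# and do not reproduce the study's data.
import pandas as pd

df = pd.DataFrame({
    "BC2": ["agree", "agree", "disagree", "disagree", "agree", "disagree"],
    "A5":  ["agree", "neutral", "disagree", "disagree", "agree", "neutral"],
})
# Row-normalised percentages: responses to A5 conditional on responses to BC2.
print(pd.crosstab(df["BC2"], df["A5"], normalize="index").mul(100).round(0))
```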

There were no statistically significant associations between attitude items and either gender or years of teaching; however, there were significant associations with teaching discipline. Mathematics teachers were more likely to be positive about the value of using NAPLAN data and to say they wanted to make more use of the data. The response patterns for the two groups were similar for A3, concerning the identification of weak students, but for A4 (identification of topics needing attention) 85% of Mathematics teachers agreed, compared to 38% of English teachers, with 4% of Mathematics teachers disagreeing compared to 27% of the English teachers (F.E. = 17.80, p < 0.001). A similar, though weaker, pattern was also evident for A7, which addressed usefulness for identifying students’ knowledge (Mathematics positive 60%, negative 8%; English positive 35%, negative 31%).

Mathematics teachers were also more likely to think that greater use should be made of NAPLAN data, both by their schools and by themselves personally: 71% and 86% of Mathematics teachers agreed with A9 and A10 respectively, compared with only 38% and 42% of English teachers (A9: F.E. = 7.918, p = 0.018; A10: F.E. = 15.451, p < 0.001).

The results above contain some interesting contrasts. Whereas teachers appear to see that NAPLAN data have some value for identifying weak students, there is not agreement that the data necessarily reflect students’ abilities. There is, however, agreement that the data are important to teachers, and that they want to be able to make more use of the data than they do, although this was markedly stronger among Mathematics teachers. Neutral responses were quite high for some items, suggesting an ambivalence that, like some of the negative attitudes, may result in only limited use of NAPLAN data.

Subjective norms

As mentioned, most teachers expressed a desire to make more use of NAPLAN data (Item A10). This attitude did not appear to be prompted by what they perceived as the behavioural norms for their school, as seen in Table 4. Only 16% of the teachers in the study felt that their school expected them to engage closely with the NAPLAN data for the students they were teaching (Item SN1), and 33% felt that other teachers whom they respected took little notice of the data (Item SN4).

Table 4 Percentage response to survey items targeting subjective norms

There were no statistically significant associations between subjective norm items and years of teaching, teaching discipline or whether or not teachers had access to the data (BC2). Interestingly, there was an association between gender and SN4 (‘Other teachers whom I respect take little notice of our school’s NAPLAN data’), with male responses showing 55% disagree and 9% agree, whereas for females the respective percentages were 15% and 44% (F.E. = 8.092, p = 0.015).

These data suggest that teachers do not feel any sense of pressure to engage with the NAPLAN data. Even for SN2, which concerns parental pressure to know the data about their child, only a quarter of the teachers agreed that this expectation existed. This may mean either that the teachers are not feeling an expectation that does exist, or that the expectation does not exist at all. In either case, this too has the potential to affect whether or not such system data will be used.

Perceived behavioural controls

Table 5 presents the data about perceived behavioural controls. The majority of teachers perceived that lack of access, lack of time, and lack of guidance for interpreting reports were issues that affected their use of NAPLAN reports. These factors have also been identified elsewhere. Both Matthews et al. (2007) and Roehrig et al. (2008) have noted the importance of teachers having access to their own class data and the need for sufficient time to digest and discuss the data. Half of the 30 teachers from five Florida schools interviewed by Roehrig et al. (2008) cited lack of time as a challenge they had faced in attempting to use progress monitoring data to inform their teaching.

In the study reported in this paper, only 40% of the teachers agreed that they were given the NAPLAN reports, and just 35% said that the reports were in a form that allowed them to do the analysis they required (Items BC3 and BC2, respectively). Fewer than a quarter of the teachers suggested that they had enough time to study the NAPLAN data (Item BC11), whereas half of the teachers expressed a desire for guidance on how to interpret NAPLAN data (Item BC10).

Barely half of the teachers were confident that they could understand the statistical analysis (Item BC7); even among the secondary school Mathematics teachers, only 61% were confident about understanding it. Just over a third of the teachers thought that NAPLAN reports were easy to understand (Item BC1), and 35% were neutral. Responses to Item BC12 (‘I am not sure how to make sense of the NAPLAN reports’) were more negative. Only 20% gave positive responses to Item BC8 (‘Most secondary teachers, not just mathematics teachers, are able to understand the NAPLAN reports’), and 43% disagreed with the idea that ‘most secondary teachers, not just mathematics teachers, are able to understand the statistical analysis of NAPLAN data’ (BC9).

Table 5 Percentage response to survey items targeting perceived behavioural controls

There was a clear association between BC2, ‘I have access to NAPLAN data in a form that allows me to get the results and analyses that I require’, and teachers’ perception of their ability to understand and use the NAPLAN data provided (Items BC1, BC4, BC6, BC7, and BC12, with lowest F.E. = 9.588 and p-values less than or equal to 0.045). This is detailed in Table 6. Those who responded positively to BC2 were more likely also to respond positively to these other perception items: BC1 (65% of those who agreed with BC2 agreed with BC1), BC4 (81%), BC6 (52%), BC7 (81%), and BC12 (disagree 81%); whereas those who responded negatively to BC2 tended to respond negatively or neutrally to Items BC1, BC4, BC6, BC7, and BC12 (negative responses 40%, 53%, 26%, 35%, and 8% respectively). The mixed response to Item BC6—‘The NAPLAN reports which teachers at our school see are easy to understand’—suggests that the format of these reports, and the statistical literacy required to use them effectively, warrant further investigation.

Table 6 Percentage of teachers responding to behavioural control items by perceived access to data

There was also an association between teaching discipline and both BC6 (reports are easy to understand) and BC7 (personal understanding of statistical analysis) (BC6: F.E. = 8.041, p = 0.018; BC7: F.E. = 8.104, p = 0.016). The results displayed in Table 7 show that Mathematics teachers were more likely to respond positively to these items than English teachers, but also that both groups were ambivalent about the ease of understanding the NAPLAN data reports they saw.

Table 7 Percentage of teachers responding to Items BC6 and BC7 by discipline area

The perceived behavioural controls data concern what teachers think they are capable of doing—in this case, with respect to the use of NAPLAN data. It is, perhaps, not surprising that more Mathematics teachers, in comparison to the English teachers, should feel that they are able to understand the data; what is striking is that, even so, over one-third of the Mathematics teachers were neutral or not confident about their capacity to understand the statistical analysis, and that fewer than half of them thought the NAPLAN reports were easy to understand. Having the data in a useable form, and having time to study them, also seem to be significant factors here, and again may affect the extent to which teachers engage with the data. Finally, the prevalence of neutral responses about perceived behavioural controls is also likely to have an impact on actual engagement.

Changes in teaching practice

Seventy-two teachers responded to the question about whether or not they had made changes to teaching plans based on some analysis of their school’s AIM or NAPLAN data (Question 8 in Table 1), with 61% saying that they had not. One teacher commented that he/she would like to, while two others explicitly stated that the data just reinforce what they already know about their students. Several comments from the 39% of teachers who said they had made changes mentioned modifications to programs for both stronger and weaker students, or a focus on areas of identified weakness. The following five comments illustrate some of these positions.

  • Targeted specific areas of language conventions, exercises on reading for meaning and vocabulary extension. (E2)

  • More overt teaching of spelling. (E3)

  • A number of students have presented with major difficulties and an individual education program has been implemented for some of them. Some of these students have other issues impacting on their learning. (M1)

  • Realised remedial Year 7 group was weaker than originally thought. Brought in much more concrete tasks. (M2)

  • Identified need and concentrated on the area identified with enrichment work. (M3)

The wording of this particular item requested only brief details of what changes teachers had made, and so it is not clear to what extent they were responding to data about students’ individual needs, whole-class needs, or topic weaknesses. There is evidence of all of these aspects in the comments, but this needs to be pursued further in future research.

Implications and conclusions

Together, the results of the open-response items and the TPB-framed Likert items make it clear that there was considerable variation in teachers’ attitudes and perceptions related to the use of NAPLAN data and that, in many cases, these attitudes and perceptions may have presented barriers to the teachers’ intentions to use such data to inform their planning. This pilot study indicates that teachers saw potential for using the student assessment data arising from external testing such as AIM and NAPLAN. In particular, they saw its value for the identification of weak students and of curriculum topics needing attention. Despite this perception of usefulness, however, most of the group felt under no pressure to engage with the data; moreover, lack of access to the data was a key perceived behavioural control. Most teachers indicated that they wanted more guidance on how to make use of the data, with many expressing a concern that the reports were not easy to understand. The teachers further perceived that those without a mathematics background may have difficulty making sense of the NAPLAN reports.

From the results related to association, it is evident that there are many interacting factors affecting teachers’ engagement with school assessment data. These preliminary results suggest that there would be value in conducting a study that works through the full survey and interview process for TPB outlined by Francis et al. (2004). This would involve detailed interviews with teachers to inform the construction of a wider range of items. The resulting questionnaire would also include items to indicate the importance or weight that should be attached to a given teacher’s responses. As an example, to interpret fully an item like ‘The leadership team at my school expects me to closely analyse my students’ NAPLAN data’ (SN1), it is necessary to include items such as ‘It is important to me that the leadership team at my school approve of my work’. Interview data would also give further insight into the strength of the barriers to engagement and how to overcome them. The present study has given an initial indication of areas that are worthy of further examination.

This data set came from two relatively small groups, namely Mathematics and English teachers from the same set of schools. Although it has provided some insight into the affective issues that might influence teachers’ engagement with assessment data, to obtain a more accurate picture a larger strategic sample is required. This, together with a more complete TPB research design, would give a clearer indication of the strength and influence of the attitudes, subjective norms, and perceived behavioural controls that drive teachers’ behaviour.

Finally, there is a need to investigate the statistical literacy needed to interpret and make use of these reports, to determine teachers’ levels of statistical knowledge, and to identify areas that might require professional development. These two factors—statistical competence and affect—together will govern the extent to which teachers and principals are able to interpret data and make consequential teaching and policy decisions that might lead to better outcomes for students. If, like teacher E1, intended users lack the necessary skills, do not believe in the value of the data, and perceive organisational and time barriers, the potential benefits of such large-scale testing and reporting are unlikely to be realised.