Introduction

Data-driven decision making continues to be a growing educational reform initiative in countries across the globe, including the US, Australia, Canada, the Netherlands, Spain, South Africa, and New Zealand, among others (Schildkamp and Lai 2012). While approaches to data use vary, the theory of action underlying these efforts is often similar. It is believed that by carefully analyzing evidence about student learning, teachers will be able to prioritize instructional time, better target instruction towards students’ individual needs, and refine instructional methods (Hamilton et al. 2009). Data use is also believed to be an important ingredient of school improvement planning.

Data-driven decision making may positively influence organizational learning. When data use is characterized by systematic and sustained reflection on multiple forms of data, it can inform a process of continuous improvement (Senge 1990). Data use can then become infused into the structure and culture of the organization. Conversely, when data use is characterized by sporadic examination of test results, it is not likely to have the desired results. In other words, data-driven decision making can take many different forms depending on what data are used, for what purposes, and by whom (Datnow and Park 2014).

In recent years, the amount of data that schools have available has increased immeasurably. However, building educators’ capacity for data use continues to be a vexing issue. Teachers vary in their ability to use data effectively, with many feeling unprepared. Teachers also vary in their beliefs about data use and its utility. The purpose of this article is to draw on extant literature to present a framework for understanding teacher capacity for—and beliefs about—data use. The research questions guiding this investigation are: What efforts have been undertaken to build teachers’ capacity for data use? What is the range of teacher beliefs about data use? We examine these questions by connecting teacher capacity building efforts with teachers’ belief systems about data use, a connection that is rarely made in the literature. Focusing on this important relationship provides a richer understanding of data use and offers opportunities to create conditions that better support teachers in their efforts to use a range of data to improve instruction.

Methodology

The methodology for this article is a review of literature. We applied two criteria for inclusion in the review. First, the publication had to include information on efforts to build K-12 teachers’ capacity to use data or teachers’ beliefs about data use, or both. Second, the source had to be published after 2001, following the advent of the No Child Left Behind Act (NCLB) in the US. Most district and school efforts at data use were launched after this key policy initiative in the US. Around the same time, data use efforts were also launched in other countries, particularly as accountability policies were coming into full force across the globe. Thus, this review is international in scope, though many of the studies were conducted in the US. All of the included studies were written in English.

In searching for studies that met the two criteria, we used Google Scholar, ERIC, and library databases. We first used the terms “teachers” along with “data use” and “data driven decision making.” We then conducted more refined searches by using the key terms of “teacher capacity,” “teacher training,” “teacher professional development,” “teacher beliefs,” and “teacher values” combined with data use. Reviewing relevant sources led us to other sources that had not come up in our original searches, some of which were on topics related to capacity building, such as teacher efficacy. Colleagues and reviewers also recommended sources. We found that most studies that addressed the issues of teacher capacity and beliefs were part of larger studies on the teacher role in data use, or on data use efforts more generally. As Hamilton et al. (2009), who conducted a rigorous review of research in generating a federally funded practice guide on data use, pointed out, “studies of data use practices generally look at a bundle of elements” (p. 18). Only a handful of studies focused specifically on teacher capacity for and/or beliefs about data use. A greater number of studies focused on capacity building than teacher beliefs.

Our review of the research draws upon qualitative and quantitative research studies, mixed methods studies, literature reviews, and conceptual/theoretical publications on the topics of teacher capacity for and beliefs about data use. Most of the studies are qualitative or use survey research methods. This is not surprising as these tend to be the primary methods by which educational researchers gather insights from teachers. While we prioritized peer-reviewed research articles and the majority of publications fall into this category, we also include numerous important reports (e.g., Institute of Education Sciences practice guides and research center reports), books, and book chapters that seemed particularly critical to include. We felt it was important to go beyond peer-reviewed research articles because these other sources contain a wealth of knowledge on the topic; excluding them would have meant ignoring key contributions. We typically only included books and book chapters if we had been referred to them from other sources. For example, a reviewer asked us to include references to popular training models for data use, and some of these are described in books. In total, the review of research presented in this article includes 43 journal articles, five books, six reports, and six book chapters on the topics under study. We also draw on a small number of other sources (three books and two articles) that relate to general conceptual issues in school reform.

A limitation of our study is that we undoubtedly did not include every relevant publication on the topic, as our search methods may not have captured all sources. However, while the list of sources we include may not be exhaustive, we believe the included works yield critical findings from the literature. We begin our review with a discussion of efforts to increase teacher capacity for data-driven decision making. In doing so, we first discuss what it means to have the capacity to use data, in other words, what knowledge and skills are required.

Teacher capacity for data-driven decision making

Teacher capacity for data use encompasses a range of knowledge needed to make sense of and use data on student learning in meaningful ways. In order to use data in the service of instructional improvement, teachers often need new knowledge and skills (Gummer and Mandinach 2015; Mandinach et al. 2015; Mandinach and Gummer 2013). Teachers must have “the skills to analyze classroom questions, test items, and performance assessment tasks to ascertain the specific knowledge and thinking skills required for students to do them” (Brookhart 2011, p. 7). Teachers also need to understand the purposes and uses of the range of available assessment options and must be skilled in translating them into improved instructional strategies (Brookhart 2011).

These are important skills, but data use is not restricted to assessment data. Teachers also need to know how to examine a wide range of other information they gather on student learning and use it as part of data-informed instruction. Means et al. (2011) argue that teachers also need to be able to pose actionable questions, find the right data to use, figure out what the data say, make meaning from it, and apply it to planning instruction. Interpreting data and applying it to instruction can be “problematic when they [educators] lack substantive knowledge of the subject matter relevant to the decision” (Coburn and Turner 2011, p. 179). Thus, content knowledge and pedagogical content knowledge are also critical to data-informed instruction (Gummer and Mandinach 2015).

Gummer and Mandinach (2015) summarize these and other important elements in a comprehensive definition of what they call data literacy for teaching:

Data literacy for teaching is the ability to transform information into actionable instructional knowledge and practices by collecting, analyzing, and interpreting all types of data (assessment, school climate, behavioral, snapshot, longitudinal, moment-to-moment, and so on) to help determine instructional steps. It combines an understanding of data with standards, disciplinary knowledge and practices, curricular knowledge, pedagogical content knowledge, and an understanding of how children learn. (p. 2)

Gummer and Mandinach’s (2015) definition of data literacy for teaching is much more encompassing than other definitions that are typically used by authors examining teachers’ use of data. This is in part because data use means different things in different contexts. In some cases, data use is defined as actions in which educators draw on a range of data (from formal and informal assessments to observations, surveys, and climate data) to inform practice (Jimerson and Wayman 2015). In other contexts, data use is defined as teachers’ use of benchmark assessment data (e.g., Blanc et al. 2010). In general, benchmark assessment data tend to predominate in teachers’ work with data, in part because many districts have prioritized the use of such data (Datnow and Hubbard 2015). However, data use is not limited to these data.

Indeed there are wide variations in notions of data use in school and district contexts. Cho and Wayman’s (2014) study focused on teachers’ use of computer data systems and examined teachers’ notions about data as a possible explanatory factor. They found that teachers’ notions about data and data use varied considerably across districts, pointing to the important role district leaders play in framing data use efforts (Park et al. 2012). For example, in one district, teachers talked mainly about state test data and did not connect data use with instruction. In a second district, data use was seen as a process of examining benchmark assessments with the aim of providing ongoing feedback on instruction. In a third district, teachers found routine classroom data to be most useful in informing instruction. Within these districts, conceptions of data use also varied somewhat by role, with teachers holding different conceptions than district or school administrators, in some cases. As the authors note, “to understand how ‘data’ can best inform practice, it is important to first understand the variety of notions of what ‘data use’ is and how these notions affect practice” (Cho and Wayman 2014, p. 30).

As Farley-Ripple and Buttram (2015) explain, “there is little evidence on how capacity for data use develops” (p. 2). Many district efforts to implement data use have focused on building data systems and helping teachers learn how to access data, rather than focusing on the skills teachers need to use data to inform instruction. Moreover, capacity building efforts have focused primarily at the individual and organizational levels, rather than addressing how capacity is embedded in social relations (Farley-Ripple and Buttram 2015). Indeed Gummer and Mandinach (2015) acknowledge that developing knowledge of data use for teaching is both an individual and a collective endeavor. As Earl and Katz (2006) explain, teachers need opportunities to develop and practice their skills at using data in order to move along a developmental continuum from novice to, ideally, expert users of data.

In the sections below, we review findings from a broader body of literature on structured collaboration, coaching, university and consultant partnerships, other forms of training, and leadership—all efforts that have been undertaken to improve teacher capacity to use data. When we speak of capacity building for data use, we adopt Jimerson and Wayman’s (2015) definition of “data-related professional learning to mean the activities in which educators participate to develop skills and knowledge relevant to data use” (p. 3). In general, relatively few studies examine the intersection of professional learning and data use in depth; most studies investigate professional learning as part of a broader study of data use (Jimerson and Wayman 2015).

Structured teacher collaboration time

Providing structured time for collaboration is one of the primary ways that many districts and schools attempt to build teachers’ capacity to use data (Farley-Ripple and Buttram 2015; Honig and Venkateswaran 2012; Horn et al. 2015; Marsh 2012; Means et al. 2010). Strong instructional communities organized to analyze data can assist teachers in using data in productive ways (Blanc et al. 2010; Cosner 2011a; Datnow et al. 2013; White and Anderson 2011). For example, White and Anderson’s (2011) study of Australian math teachers found that when professional learning opportunities were arranged so that teachers could dialogue around data and strategize about pedagogy, teachers’ instruction and student achievement improved.

Some studies focused on the tools that enable data collaboration to be most productive. For example, Firestone and González (2007) discuss data reflection worksheets that can enable teachers to analyze data. The authors recommend: “The district office can easily facilitate the effective use and understanding of assessment data by providing educators with step-by-step instructions, through a formal staff development process or through a manual that guides their analysis by communicating a common protocol for simple data analyses” (p. 151). Various tools for supporting teachers’ use of data, including district protocols for analyzing data and reflecting on data, assist in the process of data use, in some cases (Christman et al. 2009), but prove constraining in others (Datnow et al. 2013). In Datnow et al.’s (2013) study, three districts that were leaders in data use developed or adopted data discussion protocols in order to ensure that discussions about data occurred and that actions were taken on the basis of these conversations. These tools typically guided teachers to identify trends in the data, reflect on results, and develop action plans for improvement. Many teachers described productive work related to these data discussion protocols. However, in other cases, the administrative regulation that accompanied the protocols led some groups of teachers to focus on the tasks (e.g., completing a form), rather than more meaningful discussions around data.

The prevalence of structured collaboration time is indicative of trends in the field of professional development towards an understanding that teaching is a contextualized activity and that teachers need to engage in collaborative inquiry about their practice (Schnellert et al. 2008). Although numerous studies have documented the benefits of collaboration, studies also find that structures such as grade-level agendas, cultural norms, and the level of expertise in the group all play into teacher collaboration around data use (Horn and Little 2010; Young 2006). Teacher teams with limited expertise can misinterpret or misuse data, or work together to perpetuate poor classroom practice (Daly 2012). Teachers are sometimes also uncomfortable with what happens in collaboration meetings (Jimerson and Wayman 2015).

The variance in the quality of conversations can impact student learning, as Timperley (2009) found. In her study of schools in New Zealand, conversations in which there was a sense of urgency to address student progress and in which multiple sources of data were brought to bear were found to be more generative than those in which the purpose of data use was not clearly defined (Timperley 2009). Similarly, Horn et al.’s (2015) study found that teacher workgroups used different logics when interpreting the same district math assessment data, one of which was focused on instructional management (e.g., categorizing students according to achievement levels) and another that was focused on instructional improvement (e.g., diagnosing student mistakes). Meeting routines also shape the substance of conversations, how data are interpreted and noticed, and how teachers talk with each other (Coburn and Turner 2011; Horn and Little 2010).

An additional issue with collaboration as a form of capacity building for data use is the extent to which knowledge that is generated travels outward. Jimerson and Wayman’s (2015) study “identified no mechanisms to share knowledge outside the collaborative entity” and “learning gained with a collective typically stayed within the collective and was not available for the use of other educators” (p. 27). Thus, even though collaboration was touted as a major vehicle for capacity building, it did not meet all of the intended goals.

Marsh et al. (2015) compared three popular interventions to assist teachers in developing their capacity to use data: professional learning communities, data coaches, and content area coaches. The authors used the concepts of vertical and horizontal expertise to understand how these interventions support teachers. They found that when changes in instructional delivery were observed, two-thirds of the time teachers had experienced a professional learning community or coaching. However, the types of instructional changes that resulted from each intervention were different. Coaching was more likely to result in longer-term changes, whereas professional learning communities, in which teachers shared discrete strategies, functioned primarily as clearinghouses for ideas.

Coaching

Marsh et al.’s (2015) article and other related publications examine in depth the role of coaches in building teachers’ capacity for data use (see also Huguet et al. 2015; Marsh and Farrell 2015). They found that in order for coaching on data use to be effective, teachers needed to believe the coach possessed strong interpersonal skills and content and pedagogical knowledge that would be useful for them to learn (Marsh et al. 2015). A core set of coaching practices facilitated teachers’ capacity building for data use. These included assessing teachers’ needs, modeling how to interpret and act upon data, and observing teachers while they were attempting to engage in the data use process (Huguet et al. 2015). Coaches also provided feedback and shared expertise with teachers and functioned as brokers in connecting teachers to expertise and resources that could support them in using data. While all of the coaches in Huguet et al.’s (2015) study exhibited these behaviors, the “strong coaches employed a broader range of practices with more frequency than did their developing counterparts” (p. 13). Less effective coaches focused on helping teachers analyze data but did not make the links to classroom instruction.

Stronger coaches also provided teachers with access to their school’s data management system, whereas weaker coaches were more likely to provide their teachers with copies of reports (Huguet et al. 2015). While the reports were useful to teachers, they did not impact their data use practices. This study also found that the principal was an important mediating factor in coaching practice. When principals structured coaches’ jobs in ways that were sensitive to the political dynamics in the school, it was easier for coaches to develop relationships with teachers (Huguet et al. 2015).

In another study, teacher leaders served a similar coaching function in building teacher capacity for data use (Blanc et al. 2010). Blanc and colleagues examined the use of benchmark assessment data in Philadelphia schools. Teacher leaders worked with the principal to identify and modify instructional interventions on the basis of data. Group meetings in which teachers analyzed benchmark assessment data provided the venue for teacher leaders to engage teachers in a dialogue about these interventions. The work of the teacher leaders was not just focused on using data, but also on developing teachers’ skills to deliver the core curriculum. The authors argue that it is critical that principals and teacher leaders have deep content knowledge and strong facilitation skills and be involved in grade level meetings when teachers are examining data.

Coaches in Farley-Ripple and Buttram’s (2015) study also were influential actors in a data advice network. Classroom teachers tended to approach the four coaches (or the principal) for advice, more so than their classroom teacher colleagues. The authors note that it is unclear whether this is due to the expertise of the coaches or whether it was due to their formal instructional leadership positions. This study also found that certain teachers served as brokers of advice about data use, and these brokers played an important role as “bridges between perceived experts and less connected colleagues” (Farley-Ripple and Buttram 2015, p. 23). Brokers connected coaches with a broader number of teachers, which the authors argue is promising since advice then has the potential to reach advice-seekers indirectly as well.

Cosner’s (2011b) study also examined how principals and literacy coaches worked together in building teachers’ capacity for using data. She found that the literacy coaches played a critical role in developing the content and instructional expertise that teachers needed to engage in data-driven instruction. A partnership between the coach and the principal that was characterized by close collaboration, trust, and openness was also key in facilitating data use efforts among teachers. The data coaches in Lachat and Smith’s (2005) study were important in fostering teachers’ use of data, particularly in modeling data use and building teachers’ skills. Over time, the role of the data coach decreased as data teams matured in their functioning and knowledge.

While coaches can be beneficial for all of the reasons cited above, Hamilton et al. (2009) note that overreliance on coaches can mean that teachers do not personally develop the knowledge required to use data effectively. Another risk to data use efforts occurs when coaches who are supposed to be dedicated to supporting data use are pulled away for other priorities that arise.

Structured training in data use

Interestingly, almost all efforts to build teachers’ capacity to use data rely on the teacher collaboration model. Structured training in how to use data is less common. A national survey published in the US in 2009 found that 43 % of teachers surveyed received some training on how to analyze data from state and benchmark assessments, although they did not find it adequate (Means et al. 2009). A national survey the following year found that 53 % of districts offered training to teachers on how to access data systems and how to change instructional practice on the basis of data (Means et al. 2010). The scarcity of resources for professional development was cited as the major barrier. Most studies find that teachers have had little structured professional development to aid in their understanding of data or in their instructional planning on the basis of data (Davidson and Frohbieter 2011; Dunn et al. 2012; Kerr et al. 2006; Jimerson and Wayman 2015; Mandinach and Gummer 2013; Means et al. 2011; Wayman and Cho 2008).

When training does exist, it often focuses primarily on accessing the data management system (Jimerson and Wayman 2015). As Hamilton et al. (2009) state, “Training for data use often is synchronous with technology training” (p. 36), which can often be ill-timed (Jimerson and Wayman 2015). However, the focus on technology ignores the important question of what to do with the data once teachers have access to it. Teachers are rarely trained in data or assessment literacy or in how to change instruction on the basis of data. Teachers in Jimerson and Wayman’s (2015) study also received little support in how to ask questions of the data or in how to codify what they learned from data use. Knowledge was not preserved and supports were isolated.

Indeed, Hamilton et al. (2009) argue that training on a wide range of strategies to use data is necessary. Drawing on their review of research, the authors recommend training opportunities for teachers on topics including: learning the capabilities of the data system, understanding and using a cycle of instructional improvement, avoiding common data analysis mistakes, data transparency and safety, fostering a culture of data use, interpreting data in context, and using data to modify instruction, among others. Training must also be ongoing and continuous (Means et al. 2010) in order to address teachers’ evolving needs with respect to data use.

Most training efforts are not comprehensive or ongoing, and the effects of a lack of training have been documented in numerous studies. A study of data use in one elementary school found that teachers, lacking training, struggled to use data to inform instructional practice across the curriculum (Hubbard et al. 2014). Consequently, teachers did not find the data they were being asked to gather on their students to be very useful in informing instruction. Their belief systems about assessment and students’ capabilities also came into play. Teachers questioned the need to test and retest students, believing that it was nearly impossible for students who were reading far below their level to meet the grade-level standard. This perspective negatively impacted their efforts to better support student learning.

Similarly, a study in Philadelphia found that training on how to use the district’s benchmark assessment data management system was very limited and variable by site (Christman et al. 2009). Principals and technology support persons at each school received one day of training and were asked to return to their schools and train teachers in how to use the system. The principal training only focused on accessing the data, not on leading conversations about it. Subsequently, teachers only received one half day of training. In some schools, principals did not expect that teachers would use the system and instead printed reports for them.

In general, the lack of training limits teachers’ capacity to use data effectively. For example, Davidson and Frohbieter (2011) found that a lack of teacher professional development and support contributed to a failed data use initiative. This was in spite of the fact that district administrators pinned the failure on the teachers’ disinterest in change. In their study of three urban school districts, Kerr et al. (2006) reported that training for teachers with regard to data analysis and interpretation was an important factor in teachers’ ability to use data because capacity gaps were highly visible. The more successful districts in Kerr et al.’s study were the ones that offered professional development to teachers in data analysis. Teachers in Jimerson and Wayman’s (2015) study also reported that they received little guidance from their districts on how to turn data into actionable knowledge, and this hindered their efforts to use data to inform instruction.

University and consultant partnerships

Several studies have focused on the role of university researchers who function as trainers or facilitators of data use. Schildkamp and colleagues’ (e.g., Schildkamp and Poortman 2015) work in the Netherlands documents the ways in which university researchers guide the process of data analysis and bring a theoretical framework to the practice. University researchers known as facilitators gave advice about what data to collect, how to formulate a hypothesis, and how to draw conclusions from the data. The university researcher in Schildkamp and Poortman’s (2015) study also analyzed the data, though she did involve team members in the process. Members of the data team found that the university researcher’s contributions were essential in analyzing the data, that she helped the team maintain its focus, and that she provided an objective perspective as an outside consultant.

In Schnellert et al.’s (2008) study, university researchers engaged teachers in an ongoing professional development activity that involved collaboration in an instructional change cycle. Teachers collected and analyzed assessment data, co-constructed new instructional strategies, monitored results, and developed action plans. The teachers who were more deeply involved in the reflective cycles of inquiry gained the most in terms of instructional revision. In addition to building teachers’ capacity for reflective practice, the researchers’ ultimate goal was for teachers to involve their students in communities of practice in order to improve their engagement in learning. All teachers gained an appreciation of the need to gather a wider range of data on students, which helped them address instructional goals they formerly overlooked and respond better to students’ instructional needs. They also engaged in more systematic monitoring of student progress. The authors note that they did not systematically examine teachers’ beliefs as part of the study, but they note that the different patterns of engagement they observed may be due to the interaction of teachers’ beliefs, conceptions, and knowledge at the beginning and during the capacity building project.

Another model of collaboration differs from the aforementioned university researcher–practitioner collaborations in that it typically does not involve ongoing researcher/consultant involvement at the school site. Rather, this model relies primarily on institutes or workshops, and in some cases, ongoing consultancy. Harvard University’s “Data Wise Improvement Process” is one such model in which university researchers support school and district teams in the work of using data to improve practice (Bocala and Boudett 2015). Harvard researchers have trained over 2500 school and district teams in the Data Wise process using a variety of formats including online programs, face-to-face institutes, and graduate courses. Similarly, California State University Chico educators who are part of the Education for the Future Initiative provide data institutes that enable teams of educators to facilitate data use (see Bernhardt 2013). Outside organizations such as the TERC Using Data Initiative provide a range of training options including on-site training for data teams, online courses, and workshops (TERC, n.d.). So too, Bambrick-Santoyo (2010), a leader of a charter management organization, provides training to school leaders based on the approach to using data in the Uncommon Schools Network. These are but a few examples of the groups or individuals who offer practice-based, on-site or off-site training that typically focuses on the skills and tools educators need to engage in data use, as well as general school improvement planning.

Leadership

School principals are key players in facilitating data use among teachers (Blanc et al. 2010; Cosner 2011a; Datnow and Park 2014; Earl and Katz 2006; Halverson et al. 2007; Levin and Datnow 2012; Mandinach and Honey 2008; Marsh 2012; Park et al. 2012; Schildkamp and Poortman 2015). The principal often plays an important role in allocating resources and time to enable teachers to use data effectively. For example, principals in Halverson et al.’s (2007) study adapted policies and practices to structure social interaction and professional discourse on data use in their schools. In some cases, principals play an active role in training teachers to use data themselves and attend data team meetings alongside teachers (Hamilton et al. 2009).

Not only are principal actions important, but their espoused beliefs about data use are critical as well. Principals help to set the tone for data use among teachers. In Horn et al.’s (2015) study, one teacher workgroup’s instructional management logic aligned with the principal’s focus on accountability. They adopted his frame of data use as a monitoring activity, rather than as a vehicle for examining students’ mathematical understanding. As Long et al. (2008) found, “most teachers are influenced by the value their principal places on using data” (p. 223). Principal support for data use carries a great deal of weight with teachers, and only some teachers will use data when the principal is unsupportive (Long et al. 2008). District leaders also play a critical role in framing the purpose of data use and setting the direction for data use practices (Park et al. 2012). At both levels, a productive role for the leader is to guide staff in using data in thoughtful ways that inform action, rather than promoting the idea that data in and of themselves drive action (Datnow and Park 2014; Knapp et al. 2007).

In actuality, there is a range of ways in which leaders scaffold data use, some more generative of continuous improvement than others. Some leaders may promote an accountability-focused culture in which data is used in a short time frame to identify problems and monitor compliance, whereas others may promote the use of data for continuous improvement (Firestone and González 2007). For example, principals in low performing schools in Diamond and Cooper’s (2007) study oriented data use efforts around quick fixes to raise test scores in order to avoid further sanctions. Instructional efforts targeted particular students and grade levels. In contrast, in higher achieving schools, data use efforts were linked to improving teaching and learning for all students.

Principal leadership is key in supporting teachers’ capacity building efforts, but some principals find it challenging to provide teachers with the scaffolding they need to be successful. An elementary school principal in Hubbard et al.'s (2014) study was highly sensitive to the value of data, analyzed benchmark data alongside teachers, and provided teachers time to work in grade-level groups. She collaborated with teachers to identify students who were below grade level and helped them interpret students’ instructional needs. She ensured targeted students received focused attention every week. Despite all of this, the principal admitted that she struggled to enable the faculty to use data and to put student needs at the center of their reform efforts.

Some principal actions also work against the effective use of data. An example of this arose in Schildkamp and Poortman’s (2015) study when a principal in one school used data to “shame and blame” teachers. Obviously this did not engender a culture of trust among the teaching staff. The leadership of this principal contrasted with that of the principals in the other two sites who modeled data-informed action and provided the conditions for teachers to engage in inquiry around the data and take actions on it.

Even though conscientious principals, coaches, trainers, and district leaders have made admirable efforts to build teachers’ capacity to use data, sometimes these efforts are still not sufficient. In the end, most teachers have not been provided with enough training on how to understand and use data, and they have also had little training in assessment either during their preservice or inservice years (Mandinach et al. 2015; Mandinach and Gummer 2013; Young and Kim 2010). Consequently, many teachers are left unable to effectively use data to inform instruction. Teachers’ belief systems about data use are shaped by the structures and cultures in which capacity building takes place. However, teachers’ beliefs about data use are often not addressed as part of capacity building efforts, and yet they are of critical importance—a point we take up next.

Teacher beliefs

Conceptualizing teacher belief systems

Examining teachers’ beliefs allows for greater understanding of their capacity for data use efforts. It also gives us insight into the factors that motivate their actions in educational reform. Teachers come to data-driven decision making with a set of pre-existing beliefs about the value of evidence (Coburn and Talbert 2006; Coburn and Turner 2011; Farley-Ripple and Buttram 2015). Jimerson’s (2014) study of a school district in central Texas revealed that teachers’ understandings of data and data use were specifically tied to their mental models, the ways in which individuals see the world and decide which actions to take (Senge 1990). Mental models embody an individual’s assumptions, definitions, and beliefs and can shape specific dispositions and actions toward data use. Although they are not necessarily fixed, they are often quite rigid and can prevent individuals from adopting new and different ideas.

Along similar lines, Spillane and Miele (2007) point out that it is more likely that we give “selective attention” to what we consider important evidence of data and that we discriminate and privilege specific ideas that are shaped by “the mental representations that we have abstracted from our experience” (p. 50). What we believe data is telling us and how it is related to other data and to practice is informed by prior experiences. These beliefs are “stored as knowledge representations,” also referred to as “schemas” (Spillane and Miele 2007, p. 51), and they shape our process of interpretation. As Coburn and Turner (2011) explain, “people tend to search for and see aspects of the data that support their beliefs, assumptions, and experiences and do not even notice data that might contradict or challenge these beliefs” (p. 177). This sensemaking process may occur at the unconscious level, but it draws attention to the fact that beliefs are also affected by context.

The importance of context is key to Spillane’s (2012) explanation of how educators “frame and interpret what they notice” (p. 126). He explains that interpretations are “not just a function of their [educators] prior knowledge and beliefs, but also a function of their interactions with others in which they negotiate what information is worth noticing and how it should be framed” (p. 126). Sensemaking about data results from interactions that construct meaning and influence actions (Coburn and Turner 2011). This perspective suggests that addressing teachers’ interactions may affect both beliefs and actions in ways that support data use.

Mental models can be reconstructed through formal training, modeling by leaders, social interaction with colleagues, and personal experience (Jimerson 2014). Since efficacy beliefs are co-constructed with others in “communities of practice,” they can be attended to (Takahashi 2011, p. 732). Wenger (1998) explains that individuals who work in communities of practice negotiate meaning and engage in a process of reification, whereby they imbue meaning that affects their beliefs. Investigations of sites of shared practice, therefore, offer the opportunity to examine the relationship between data-driven decision making and belief development. When the context is constructed in a way that supports knowledge sharing, educators’ mental models may change. These arguments are reinforced by the findings we discussed earlier about teacher collaboration as a strategy for developing teacher capacity for data use.

Range of teacher beliefs about data use

What are some of the beliefs or mental models that influence teachers’ use of data? Numerous studies have found that one of the key attitudes that shape teachers’ actions is a lack of confidence in their ability to use data to improve instruction (Bruning et al. 1999; Dunn et al. 2012; Woolfolk et al. 1990). Many teachers believe they do not have the knowledge to understand the data and/or to translate it into practice. This is not surprising given the findings we shared earlier on the limitations of capacity building efforts.

Teachers in Pierce and Chick’s (2011) study of data use among English and math teachers in Australia confirm the powerful relationship between capacity, individuals’ beliefs, and data use. When 84 secondary teachers were surveyed as to their attitudes toward and use of results from NAPLAN (National Assessment Program-Literacy and Numeracy), they reported difficulty in understanding, and a lack of confidence in dealing with, statistical data. Although understandably more mathematics teachers than English teachers believed they could understand the statistical data, “what is striking is that… over one-third of the mathematics teachers were neutral or not confident about their capacity to understand statistical analysis and fewer than half of them thought the NAPLAN reports were easy to understand” (p. 444). This difficulty in understanding and making sense of the data created barriers and dissuaded teachers from using data. Not surprisingly, 61 % of the teachers who responded to a question of “whether or not they had made changes to teaching plans based on some analysis of their school’s data” reported that they had not made changes to their instruction (p. 445).

Dunn et al.’s (2012) survey of over 1700 teachers in one US state found that teachers’ beliefs about their own efficacy in using data, coupled with anxiety about the process, limited their ability to use data effectively. Efficacy was defined as “teachers’ beliefs in their abilities to effectively analyze and interpret student data in order to successfully connect or apply their interpretation of data findings to classroom instruction and to improve student learning” (p. 90). In this case, efficacy beliefs translated into teachers viewing the ability to analyze and interpret data as distinctive from the ability to use the data to inform instructional practice. Lack of efficacy, in the presence of even a small amount of anxiety, meant teachers struggled to use data.

While these studies show some uniformity regarding the connection between teachers’ beliefs and dispositions toward data and data use, there is clearly within-group variation. It is also important to parse out teachers’ beliefs about assessment from beliefs about data use more generally. A multi-group analysis of teachers in Australia found that although conceptions about assessment among primary and lower secondary teachers were similar in that they were not anti-assessment, their use of assessment was statistically different (Brown et al. 2011). “Primary teachers agreed more than secondary teachers that ‘assessment improves teaching and learning’, while the latter agreed more that it ‘makes students accountable’” (Brown et al. 2011, p. 210). Differences were related to beliefs that mediated policy and outcomes. Teachers also were willing to take professional responsibility for improving school outcomes “while rejecting the notion that assessment should focus on students” (p. 218). This rejection stemmed from concerns regarding “the quality and usefulness of the assessment resources being used to make students and schools accountable” (p. 218). Assessment was viewed as irrelevant if the data was punitive for children or if the validity of the data was called into question. Teachers viewed poor quality assessments as having unjust consequences for learners.

Along similar lines, Remesal’s (2011) qualitative study of 50 primary and secondary mathematics teachers in Spain found that four belief categories shaped teachers’ conceptions about assessment: the influence of assessment on teaching as separate from its influence on learning, and accountability as separate from the accreditation of achievement. These belief aspects led teachers to view assessment as either a positive change or as a disruptive measure. Teachers who held a pedagogical orientation toward assessment (typically primary teachers) viewed assessment as a valuable tool to monitor and support student learning, as an instrument for quality control, and as a way to know if students had indeed learned and reflected on their lessons. Conversely, high school teachers were more likely to hold a societal conception, viewing assessment as a “benchmark, a reference point for the establishment of minimum levels of expected performance” (p. 477). The majority of the teachers, however, reported mixed conceptions of assessment. According to the author, this variation in beliefs indicates that the nature of assessment is complex: “all purposes of assessment (for/of learning, accreditation/accountability) [are] part of the whole system affecting the daily classroom practices [and they are in a] continuous and inevitable tension” (p. 479). Notably, this study, like Brown et al.’s (2011) study, focused on assessment specifically, not the full range of data or information that teachers may use to inform instruction.

Teachers’ beliefs play out in other ways that affect data use. The absence of teacher buy-in was found to limit Dutch teachers’ use of data in Schildkamp and Kuiper’s (2010) study. Lack of buy-in was associated with teachers’ belief in an “external locus of control.” In other words, according to one teacher, students’ achievement could be understood by whether you had a year of “good students or not so good students” (p. 488). Having data therefore would be viewed as not helpful. So too, some teachers in Pierce and Chick’s (2011) study felt that the data reported to them merely indicated what they already knew about students.

Cultural and structural factors influencing teachers’ beliefs

Trust is a major factor influencing teachers’ beliefs (Tschannen-Moran and Woolfolk-Hoy 2001). Marsh’s (2012) review of interventions designed to support data use found that teachers disengaged when they feared that their personal identity would be exposed; even when assurances of anonymity were given, teachers expressed concern. They were often afraid that district leaders would use the data for evaluative purposes, and as a result, they did not trust or feel comfortable with data discussions. Nelson and Slavit (2007) reported that trust was an essential disposition that dictated teachers’ data use, particularly if teachers were asked to expose data in a collective setting. Teachers also have concerns about data being misused or hidden to support a decision that has already been made by an administrator (Ingram et al. 2004).

Some schools take intentional steps to govern meetings in ways that protect identity and limit risks of individual exposure. When confidentiality is made a priority, educators are more likely to participate. One study documented the use of “talk moves,” strategies to help educators know “how to disagree with one another in a respectful manner” (Honig and Ikemoto 2008, p. 348) and engage in productive discussions about data. They were helpful because they were able to structure conversations that limited the “risks” of collectively analyzing and critiquing instructional practice (Marsh 2012, p. 13).

District administrators in Datnow and Park’s (2014) study worked very hard to build trusting relationships and to create an atmosphere of data use that was non-threatening. They were clear that teachers would not be evaluated or penalized for their students’ performance on assessments, but rather that teachers should use data as a tool for seeing where growth or change was needed. Similar efforts occurred at the site level. A principal explained that when she initially came to her site, she had to present the use of data in a positive manner so that teachers understood that she was not trying to “single out” or “criticize” people. People were defensive when confronted with data and so she began framing it as an indicator of how the school as a team needed to improve. “Reculturing” the school in this way took time.

External policy demands are also a key factor influencing teacher beliefs about data use. Remesal’s (2011) study reminds us of the importance of the policy context in understanding teachers’ beliefs about assessment. Spain’s new educational reform plan and the external assessment policy demands that it placed on teachers (the move from a basic plan of education to a compulsory education for secondary students) meant that students’ promotion decisions and career paths were heavily reliant on assessment data. Increased pressure to report quantitative data caused teachers to view assessment as an instrument of one-way communication with families and students—a way to report on students’ progress, but also a process that was viewed as “disassociated from their teaching duties” (p. 477).

In the US, teachers’ beliefs about data use are sometimes conflated with their beliefs about high stakes accountability policies. Some teachers confuse data use with accountability and erroneously assume that the two must be linked. In a school in Datnow and Park’s (2014) study, some teachers who held negative beliefs about data use closely associated it with No Child Left Behind. When student learning was narrowly defined in terms of performance on state mandated multiple-choice tests, teachers often felt the tests were geared towards simple recall, rather than critical thinking. Other teachers in the study, who took a broader view of what counted as “data,” acknowledged that data “opens your eyes more” or helps teachers “avoid shooting darts blindfolded” (p. 53). In all cases, teachers acknowledged that data did not tell them everything they needed to know in order to help students be successful. Their informed professional judgment was essential as well.

As we noted earlier, most training efforts related to data use focus primarily on accessing data and do not specifically address teachers’ belief systems. Even when suggestions for training for data use are quite comprehensive (Hamilton et al. 2009), they often do not include attending to teachers’ underlying beliefs about data use, perhaps because this is assumed to occur within the context of structured collaboration or coaching. Indeed in Lachat and Smith’s (2005) study, when data teams developed clear questions to focus their collaborative examination of data, this provided a space for school personnel to consider how teachers’ beliefs (and school policies and classroom practices) might be affecting student achievement. When data revealed false assumptions or beliefs, the collaborative space provided a forum for addressing them. Along similar lines, the Data Wise project at Harvard works to cultivate three habits of mind for using data wisely (Bocala and Boudett 2015). In doing so, they attempt to actively confront educators’ “bad habits,” including rushing through the data use process, jumping to quick conclusions, and having narrow conceptions of what counts as data. However, it is unclear how teachers’ beliefs may shift with such efforts, as studies tend not to document belief shifts.

These are but a few examples of the ways in which cultures and broader policy structures influence teachers’ beliefs about data. When we examine the research we have presented on teachers’ beliefs alongside the research on capacity building efforts, we see that they are inextricably linked. The general lack of capacity many teachers feel regarding data use, as well as their beliefs about the utility of data and concerns regarding accountability, figure strongly into their efforts to use data to inform instruction.

Conclusion and implications

In many schools across the globe, teachers have access to a wide variety of data that could be used to inform instruction. Turning these data into usable knowledge for instructional improvement is not so easy, however. The skills that teachers need go beyond knowledge of assessment and of data analysis, as data use is not limited to assessment or even numeric data alone. The knowledge and skills that teachers require in order to use data effectively are broad and embedded in the craft of teaching itself (Gummer and Mandinach 2015). Analyzing patterns in student achievement or understanding an individual student’s strengths and weaknesses also necessitates that teachers deeply understand the subject matter, the curriculum standards, and how students learn. Teachers draw on their professional wisdom in making sense of data.

As this literature review has revealed, teachers’ capacity to use data is developed primarily in collective spaces, as teachers are frequently grouped together for structured collaboration focused on data use. Commonly, teachers engage in these structured collaboration opportunities with other teachers from their grade level and/or subject area. In some cases, teacher collaboration for data use also involves principals, instructional coaches, university researchers, or consultants who serve as facilitators. When these individuals frame data use as part of a cycle of inquiry and careful reflection (rather than simply being about accountability) and are able to contribute deep knowledge of instruction and data use themselves, it is much more likely that their involvement has a positive impact on teacher capacity building. A climate of trust among all individuals is also critical, yet can be slow to develop given the fear that exists among some teachers about data being used against them or their students.

There is a large body of work suggesting that despite these and other efforts at capacity building, teachers often feel unprepared to engage in data use. Training for data use is often limited to information on how to access a data management system. Professional development linked to data use efforts very rarely aims to expand teachers’ repertoire of instructional strategies and their skills at instructional differentiation. Data use is intended to yield more fine-grained information about student achievement that will allow teachers to address students’ individual needs, but training efforts often do not provide methods for doing so. Thus, while teachers may develop the skills to access and make sense of data, they may lack knowledge of how to adjust their instruction. It is often presumed that this knowledge is shared when teachers work with their colleagues in collaborative groups; however, whether this happens depends a great deal on the instructional expertise within the group and whether there is sufficient time for delving into instruction. Often meetings focus primarily on examining the data, and the question of “how might I teach this concept in another way so that more students are able to learn?” gets short shrift. The instructional component of the data-driven decision making process is essential if we are to realize the benefits of this educational reform.

Teacher belief systems are frequently unaddressed in educational reform efforts, including data-driven decision making. We tend to think that new belief systems will follow when teachers implement new practices, curricula, or policies, and tend not to address teacher beliefs a priori or even in tandem with such changes. When teachers become involved in data use, they bring with them pre-existing schemas about teaching and learning, assessment, data, and data use. Sometimes these belief systems prevent teachers from seeing the utility of data use; conversely, depending on their prior experiences, they may predispose teachers to be enthusiastic about using data. Data use efforts in districts and schools typically seek to engage all teachers, but they often do not account for—nor address—the wide range of beliefs that teachers hold about data, data use, assessment, and instructional change.

This review of research reveals that teachers’ beliefs about data use are shaped within their professional communities, in training sessions, in their interactions with coaches, principals, and facilitators, and by their previous experiences with data and with reform more generally. Teachers’ beliefs are also shaped by the cultural and policy contexts in which teachers work. Leaders play an important role in helping teachers understand that all evidence of student learning, including insights from teachers’ professional judgment, is critical in informing instruction. It is also vital to decouple data use and external accountability mechanisms. As long as data use is associated with external accountability, teachers will distrust it and experience it as yet another top-down initiative. Thinking broadly about what constitutes “data” can help facilitate this change. It may also enable data use to move from being a reform in which teachers perceive themselves as its subjects to one in which they see themselves as its drivers. In order for this to occur, teachers will need to embrace the idea that drawing on evidence of student learning is an essential ingredient in planning high quality instruction.

Teachers’ beliefs and capacity are at the heart of the connection between data and instructional change. Focusing capacity building efforts on exploring teacher belief systems and expanding teachers’ toolbox of instructional strategies could provide significant leverage in implementing data-driven decision making. After all, unless reforms address the core processes of teaching and learning in the classroom, school improvement is unlikely (Elmore 1996). This is particularly important as we implement the goals for teaching and learning that accompany the Common Core Standards in the US and move towards twenty-first century learning across the globe.

Although there are many lessons from the research reviewed here, this analysis also uncovered a dearth of research on the intersection of teachers’ capacity building and beliefs about data use. More research is needed that focuses on efforts to build capacity in combination with deliberately attending to teachers’ beliefs, attitudes, and perceptions about data use. A variety of methods could be used for such an inquiry, but such studies almost certainly require capturing change over a period of time. A critical component of this research should be documenting change as it unfolds, rather than simply asking teachers to reflect on past experiences and processes. This will help improve our understanding of how teachers’ beliefs and capacity unfold in the course of a data-driven decision making initiative. These findings could have important implications for future educational reform efforts.

Future research should also focus more on places where capacity building efforts have proven to be successful, as we currently have far more knowledge about the challenges than we do about the successes. In part this is because many of the schools and districts in the US that have engaged in data use efforts have had limited resources to devote to capacity building, and thus efforts tend to be more limited. It would be very instructive to carefully document the impact of a comprehensive effort that is designed to address teachers’ beliefs and capacity for data use. An international comparative study of teacher beliefs and capacity building would also yield crucial insights, as it would allow for an examination of how broader policy systems may serve as facilitators or hindrances to the goals of data-driven decision making.