The current issue of Educational Assessment, Evaluation and Accountability is the first to be issued under leadership in transition. At the beginning of 2016, we took over as Editors-in-Chief. We would like to acknowledge the contributions of Dr. Karen Edge, the former Editor-in-Chief, who took over from Professor John MacBeath and Professor Lejf Moos, as well as the journal's history before that. We also want to express our gratitude to the Springer publishing team, the editorial board, and the reviewers and authors for their collective efforts in making the journal a well-received one. Educational assessment, evaluation, and accountability remain evident in educational practice and policy making in virtually every country (MacBeath and Moos 2009). We look forward to collaborating with you all to enrich the discourse in these three areas, which are so vital to the health of educational systems and the children those systems are entrusted to serve.

During the last two decades, educational policy and several waves of public sector reform have raised expectations among parents, school board authorities, and the general public about what schools should achieve. Many countries have attempted to modernize education by implementing leadership and management structures and processes that emphasize performance management and accountability arrangements (Gunter et al. 2016). In parallel, we have witnessed a growing global movement toward so-called evidence-based policy and practice, which in many countries accelerated around the turn of the new millennium due to the attention attracted by the PISA results. Student test scores, educational standards, competition, and benchmarking have increasingly challenged teachers and principals by taking center stage as the key drivers of improvement (cf. Fuller 2008). Much faith has been placed in assessment tools that generate data on effectiveness and efficiency, and educators at different levels of the school system are expected to use these data to enhance teaching and learning in order to raise student performance (Skedsmo 2009, 2011).

Data use can be defined as what happens when individuals interact with test scores, grades, and other assessment tools (Coburn and Turner 2011; Spillane 2012). Until now, most studies in this area have been conducted in North America, but data use is a growing international research topic and already a disputed area (Prøitz et al. 2015). On the one hand, many studies suggest that focusing on data about student performance encourages collaboration among teachers to improve practices and to justify actions toward important stakeholders (e.g., Datnow 2011). On the other hand, a substantial body of literature blames data use in certain accountability contexts for shifting teachers’ perspectives away from a comprehensive approach to teaching and learning toward strategies that promise quick gains in test scores (e.g., Valli and Buese 2007). It can be argued that data provided by standardized tests embody particular representations of students’ learning outcomes, enabling users to see some aspects of teaching and learning processes while constraining others (Spillane 2012). Moreover, the type of accountability practices tied to achieved outcomes influences what kind of professional learning and development can take place when teachers and school leaders interact with data (Mausethagen et al., forthcoming 2016).

For this first issue in 2016, we have selected from the pool of accepted papers four articles that address data use, each providing different perspectives and new insights.

Sun, Przybylski, and Johnson conducted an extensive review of the literature on data use published during the last 14 years, much of it from North America. The authors identify factors behind the successes and failures of teachers’ data use and highlight the need for principals to support the development of data-wise cultures.

Jerome de Lisle reports on emerging data-use policy in Trinidad and Tobago. He shows that much of the data on student performance gathered over the last decade provides insight into variation in school performance and issues of inequality. He also problematizes the scarcity of actions taken at the system level to address the lack of data-driven inquiry amid multiple sources of evidence.

Curry, Mwavita, Holter, and Harris report on a case study of data use in a school district in the American Midwest. The authors claim that current high-stakes accountability connected to standardized testing is in danger of demotivating teachers and preventing data-informed decision making. They argue for complementing standardized test data with a teacher-centered formative approach to build capacity for effective data generation, collection, and utilization.

Like Curry et al., Jo Beth Jimerson argues for the need to align standardized testing with, and make sense of it through, existing knowledge and “lived” experience. She reports on an instrument developed and piloted to collect information about teachers’ attitudes toward data use. The purpose is to help educational leaders develop data-informed practices that take contextual issues into account.

All four articles report on the challenges practitioners face in connecting “externally produced” performance data with “their” professional knowledge and experience to improve teaching and learning. This strengthens the argument that the use of standardized test data serves more of a controlling and accountability function, one that does not necessarily promote the improvement of professional practices. It implies that teachers need to draw on additional sources to understand and develop their practice in ways that enhance student learning, as Curry et al. also suggest in this issue.

All authors describe ways to move the work on data use forward, and one important direction they suggest clearly implies stronger involvement of practitioners, namely school leaders and teachers. We conclude that there is still a need for more knowledge about what actors at different levels actually do under the broad banner of data use, especially since many studies frame teachers as implementers or deliverers of best practice based on “evidence,” rather than as professionals who rely on different types of data and knowledge sources to make professional choices (Prøitz et al. 2015).

We also need more knowledge about the types of data use that improve educational practices in a broader sense, and about their interplay with governing processes and different forms of accountability. At the same time, it is important to question critically how much and what kind of data are needed for which purposes, and to investigate how the feedback of data is aligned with and embedded in further improvement work at the organizational and system levels. Last but not least, research is needed on the cost-benefit relation, that is, the full costs of test and data production relative to their impact on improvement practices and school outcomes.