Across the United States, the press to become “data-driven” has stretched from policymakers to district leaders to campus leaders to teachers and their students. The theory of action seems to be that if educators can readily access a variety of data related to student needs, they will then make instructional changes needed to address the learning needs of students and narrow achievement gaps (Datnow et al. 2007; Marsh et al. 2006). Indeed, there is evidence that under the right conditions data use can catalyze improvements in teaching and learning; that is, in a nonthreatening culture of inquiry, educators can collaborate around data in ways that support improvement efforts (Louis et al. 2010; Marsh et al. 2010). However, educators sometimes engage in practice in ways that do not align with a strong evidence base (Coburn et al. 2009; Farley-Ripple and Cho 2014).

Our review of the evidence around direct involvement of students in collecting, tracking, and utilizing their own data—what we term “student-involved data use” (SIDU)—suggests just such a gap between research and practice. To be clear, our stance is neither that this practice is effective nor that it is ineffective, but that the field is in need of evidence that indicates when and under what conditions SIDU may be beneficial to teaching and learning processes. This paper narrows this research-practice gap by mapping the terrain on SIDU. We provide background on the evolution of the practice, outline common components of the practice as derived from research, social media/internet sources, and training materials, and posit avenues for inquiry to build the evidence base for a practice we perceive as teeming with potential as well as risk.

Background

Data use in and of itself is not new; in much of the United States, state-level accountability policies have leveraged formal data use for nearly three decades (Hamilton et al. 2009). The use of formative assessment—that is, the practice of providing timely, specific, and constructive feedback to better support the process of learning—has been studied for well over three decades (Black and Wiliam 1998; Hattie 2011; Reeves and Flach 2011; Shute 2008; Stiggins 2005; Wiliam 2011). And, involving students in various aspects of goal-setting in connection with such practices is not only part of formative assessment (Stiggins 2005; Hattie 2011; Heritage 2010), but was recommended explicitly in the Institute of Education Sciences practice guide for using data to improve student achievement (see Hamilton et al. 2009). What is new to the landscape, but growing in popularity, is an apparent offshoot of such work—SIDU.

We define SIDU as the practice of direct involvement of students with data analysis and planning in very specific and formalized ways toward an end of improved academic outcomes. In SIDU, the student is engaged in tracking his or her data in a personal tracking system, sometimes referred to as a “student data folder” (Jim Shipley & Associates, JSA 2012b; Park et al. 2013). Students work with teachers to analyze their data, establish personal learning goals, and determine effective learning strategies. They then monitor progress toward these goals, often using graphic displays (semi-public as well as private; individual and, in some cases, aggregated by class). Students may also facilitate parent–teacher–student conferences by discussing their data, learning goals, and progress (JSA 2012a, b).

This particular iteration of involving students seems to be increasingly popular among practitioners: for example, an internet search for “student data folder” produced just over 1.1 million results, including numerous Pinterest pages where ideas for collecting and displaying student data in the classroom are exchanged among teachers and school leaders (see Courtney 2014 for one example). The websites of school districts, as well as those of training providers, evidence widespread idea sharing and implementation of SIDU strategies in multiple states (e.g., Hurst-Euless-Bedford Independent School District, HEBISD 2011; JSA 2013; Martin 2011; Montgomery County Public Schools, MCPS 2010). And, three recent studies (Kennedy and Datnow 2011; Marsh et al. 2014; Park et al. 2013) documented the work of teachers engaging in SIDU across districts and states.

Despite such evidence of popularity among practitioners, the research base is decidedly lacking. To date, there has been little in-depth analysis that explores the theory-of-action behind student-focused data-driven decision making (DDDM) strategies (Marsh et al. 2014 is a notable exception). The few studies that do address SIDU are predominantly descriptive, as opposed to examining the nuances of implementation or of when and under what conditions these practices result in benefit to students. At present, the vast majority of research focuses on what adults do with student data (see Coburn et al. 2009; Hamilton et al. 2009; Kerr et al. 2006; Marsh et al. 2006; Park and Datnow 2009) rather than how data are analyzed and acted upon within the teacher–student dyad. We therefore know relatively little about how SIDU plays out in practice or about the effects of these practices on students.

The roots of SIDU: From the boardroom to the classroom

As we have noted, in purely procedural terms, involving students with tracking and analyzing their own data is not novel. For example, athletic coaches have long worked with athletes to analyze game film or statistics to effect improvement. In a more classroom-specific context, however, the systematic engagement of students with tracking and analyzing data over time to facilitate goal-setting and achievement appears to be a more recent phenomenon with roots in iterations of DDDM as influenced by continuous improvement models stemming from business and industry.

Continuous improvement and DDDM: The accountability-data use nexus

A heightened focus on DDDM in schools can be traced to enactment of the No Child Left Behind Act of 2001 and, more recently, the American Recovery and Reinvestment Act of 2009 (Hamilton et al. 2009; Ikemoto and Marsh 2007; Kennedy and Datnow 2011; Loeb et al. 2008; Plank and Condliffe 2013). However, the roots of data use—and of student involvement in data use—stretch back beyond federal and even state accountability policies to the total quality management (TQM) movement, which gained traction in the United States in the 1950s (Hackman and Wageman 1995; Neves and Nakhai 1993; Weaver 1992). Business and industry adopted quality improvement practices involving careful attention to systems thinking, collaborative participation, and the identification of measurable outcomes (Senge 2006; Weaver 1992); education as a whole adopted many of these strategies later, largely as a result of more formal accountability policies (Park et al. 2013; see also Senge et al. 2012). This is not to say that schools were unconcerned about improvement prior to the rise of accountability policies, only to note that such policy enactment leveraged increased attention to measuring and reporting data aligned with particular criteria.

While many models for continuous improvement have been developed in the years since TQM-oriented practices gained a foothold in American business and industry (Park et al. 2013), one that has permeated the ways in which schools operate is the Malcolm Baldrige National Quality Award and the accompanying “Education Criteria for Performance Excellence” (see Baldrige Performance Excellence Program, BPEP 2015). The Baldrige Award, introduced in 1987, evolved from “a means of recognizing and promoting exemplary quality management practices to a comprehensive framework for world class performance, widely used as a model for improvement” (Flynn and Saladin 2001, p. 617). The criteria include standards related to strategic planning, resource allocation, the collection and analysis of data related to process and outcomes quality, and collaborative goal setting and measurement of progress toward those goals, among others (BPEP 2015; Flynn and Saladin 2001; Neves and Nakhai 1993). The award can serve as a catalyst for improvement as schools work through the intense application process (and in so doing, revamp internal processes), but is also broadly considered a high honor for recipients. Baldrige criteria also appear to intertwine with the development and proliferation of SIDU.

Baldrige and SIDU in practice

Practitioner-oriented materials specific to SIDU are replete with references to the Baldrige framework. Training materials used in multiple states and across multiple districts for working with students on “data folders” or in constructing “classroom data walls” explicitly reference Baldrige criteria (JSA 2012a, b). School district personnel describe using the Baldrige criteria to guide not only continuous improvement efforts, but also efforts to directly involve students in personal goal setting and data tracking (Miller et al. 2009). Also, many practitioner-oriented texts provide guidance to school leaders and teachers on how to apply the Baldrige framework—including student involvement in data use vis-à-vis “data folders” and “classroom data walls”—to the campus and classroom levels (e.g., Benjamin 2000; Burgard 2009; Byrnes and Baxter 2012).

Web resources also suggest an association between Baldrige-oriented strategic planning and the implementation of SIDU. For example, several trainers at Jim Shipley & Associates, one of the foremost providers of such professional development, have in-depth experience with Baldrige criteria, having served as evaluators for the Baldrige Quality Awards program and for related Quality Awards programs (JSA 2013). Of the five district websites we examined, two used Baldrige-specific language in explaining student data folders; one provided external links to Baldrige-specific information; one used “continuous improvement” language (but referenced training by a Baldrige-oriented company); and one simply referenced “continuous improvement” in describing efforts related to student-involved data use.

Baldrige and SIDU in research

In the little research that does exist specific to SIDU, school personnel who describe efforts to directly involve students with tracking and analyzing their own data often explicitly reference Baldrige criteria, describe professional development provided by organizations that base training on the Baldrige framework, or echo language of the criteria in their rationales for involving students in data use (Banister 2002). Park et al. (2013) conducted case studies of school districts and organizations steeped in continuous improvement programs; their findings described the work of two school districts—Menomonee Falls, Wisconsin and Montgomery County Public Schools, Maryland. Leaders in Menomonee Falls described working with professional development providers to, in part, help teachers learn how to use “quality tools” such as Plan-Do-Study-Act (PDSA) cycles in the classroom with students. Leaders in Montgomery County Public Schools (which won the 2010 Malcolm Baldrige National Quality Award) reported that all principals receive training in the Baldrige criteria and that “teachers attend a Quality Academy that teaches them how to create a Baldrige-based classroom learning system” (Park et al. 2013, p. 17). In another study of a quality improvement initiative, Quebodeaux (2010) found “data binders” to be a common strategy described by principals, and the structures of the parish-wide initiative were closely linked to the use of Baldrige Education Criteria. In short, where student data folders or classroom data walls are in place, Baldrige criteria or links to Baldrige-oriented training are typically also present.

Common characteristics: SIDU, formative assessment, and special education

Though the main roots of SIDU seem firmly planted in the world of TQM and continuous improvement, SIDU does share some characteristics with practices in the areas of classroom-based formative assessment and special education instruction. In the sections that follow, we trace these connections to paint a more complete picture of this particular iteration of student involvement and how we perceive it as borrowing, but ultimately diverging from, similar practices in these areas.

SIDU and formative assessment practice

For quite some time, researchers have demonstrated that classroom-based formative assessment practices are some of the most effective ways to support improved student achievement (e.g., Black and Wiliam 1998; Hattie 2011; Heritage 2007/2010; Shute 2008; Stiggins 2005; Wiliam 2011). Within a formative assessment construct, teachers include students in defining clear learning targets, engage students in work toward those goals, and provide specific and timely feedback in ways that allow for immediate correction of error, whether that error stems from a misapplication of a process or a deeper gap in conceptual knowledge (see Wiliam 2011). This type of “assessment for learning” necessitates direct engagement with students and a process of reciprocal instruction in which the student learns from the teacher and the teacher learns (from the student and the student’s responses) how to adjust instructional strategies (Heritage 2010).

SIDU as described in this paper shares some characteristics with formative assessment practices. Certainly, learning goals and clear learning targets may be closely linked, though in some instances SIDU practices seem to incorporate the language of testing and accountability [i.e., “… to get 90 % or better on the end of unit test” (JSA 2012b, p. 14)] rather than the rich and detailed language of standards echoed in the formative assessment literature [i.e., “I can describe the processes that have been used to create, amend, and repeal laws” (Chappuis et al. 2012, p. 321)]. Engagement with students and tracking of student data may likewise be observed both in SIDU and in classrooms engaged in robust formative assessment; however, this difference in focus—specific, concrete, standards-based goals and measures versus grade- or test-score-oriented targets—appears to be the junction at which SIDU and formative assessment practices intersect or diverge, depending on implementation.

Yet SIDU, as described in the limited research available, is not synonymous with formative assessment practice. Formative assessment practice does not lean heavily on public displays of data, or even on broad-scale assessment data (i.e., state tests, benchmark exams); it is deeply rooted in standards-oriented language and in the day-to-day work and micro-adjustments of teachers and students as they build toward standards-based goals (see Chappuis 2009; Chappuis et al. 2012). That is, in formative assessment practice, a broad-scale standardized test may indeed provide a marker of progress or a starting place for inquiry about teaching and learning, but a particular level of success on the test is not in and of itself the learning goal (see Wiliam 2011).

SIDU and special education practice

SIDU also appears to share some practices with the behavior modification and tracking strategies common to special education. Sugai et al. (2000) and others have detailed the use of practices that help instructors focus on early screening, goal-setting, tailoring of instructional efforts and supports to student needs, continuous progress monitoring toward established goals, and adjustments based on timely data collection (see Maggin et al. 2011; Simonsen et al. 2008). Such practices, indeed, are cornerstones of effective Response to Intervention approaches. Sometimes diagnostic and intervention strategies include involving students in documenting their own behaviors—typically at timed intervals—to help students realize progress toward a goal or reward (Maggin et al. 2011). However, a key difference between these practices and SIDU as we conceptualize it is that these behavioral tracking practices are not meant to continue ad infinitum; that is, they address the need to develop (or extinguish) a particular behavior with the ultimate goal of discontinuing the intervention once the student can successfully function in a less restrictive context. SIDU, by contrast, aims to instill in students an enduring habit of mind to engage in goal-setting, strategic planning to achieve those goals, and reflective self-monitoring. In this vein, SIDU in the general education context is a departure from the individualized supports piloted and developed in the special education realm. This focus on strategic planning also situates SIDU more squarely in the tradition of continuous improvement or DDDM models.

In sum, SIDU as a structured data use practice seems to have evolved from a milieu that overlaps with certain practices that are themselves steeped in a rich evidence base. Providing students timely and concrete feedback toward clear, standards-based learning targets is a hallmark of good formative assessment practice (Black and Wiliam 1998; Chappuis 2009; Chappuis et al. 2012; Heritage 2010; Stiggins 2005; Wiliam 2011). Helping students track their progress to increase their sense of agency is an established component of special education instruction (e.g., Maggin et al. 2011; Simonsen et al. 2008). However, the current practice of SIDU—from processes to tracking systems to reporting methods—seems most deeply rooted in and influenced by a “trickling down” of business- and industry-style DDDM approaches from the boardroom to the classroom (Marsh et al. 2014).

Compiling the evidence on SIDU

To better understand this emerging phenomenon, we scanned practitioner and peer-reviewed literature in multiple areas. We searched for peer-reviewed research and other material from scholarly sources (e.g., white papers, edited books, or reports issued by educational or governmental entities). We conducted internet searches for practitioner-oriented literature (e.g., how-to guides, as well as websites of entities providing technical assistance or training to practitioners). We also culled the internet for school district websites that addressed issues related to SIDU. Last, we scanned social media to determine how ideas pertinent to SIDU were being exchanged by teachers or school leaders.

We used the keywords “student data folders,” “student data binders,” and “classroom data walls” to identify potential materials for review. We excluded results that were specific to technological products as well as those specific to adult learning contexts. Drawing from our own backgrounds in school leadership, we anticipated some overlap between SIDU and strategies commonly used in special education; we therefore included literature specific to the use of behavior- or academic-tracking practices within special education contexts. Using these parameters, our search in the scholarly literature returned 27 hits.

Results amongst practitioner literature and in general internet searches (i.e., Google) were more plentiful. A search of Pinterest boards returned 77 hits; we selected five of the most popular, as determined by the number of “followers” for each board, for closer examination. Google searches for our terms returned over 300,000 hits; we note that a scan of the first several pages of results suggested that most of these were images (educators sharing pictures of their data walls or student tracking forms), compliance documents (e.g., campus or district improvement plans), individual educator webpages, and campus resource documents. We selected five general school district pages that provided information specific to classroom continuous improvement efforts to illustrate the types of information shared by school districts with parents and the community around these issues. Table 1 provides a list of the sites we selected to inform our exploration.

Table 1 District and internet/social media sites

Theoretical perspective

Our approach to this issue stemmed from our interest in work related to achievement goal theory (Blackwell et al. 2007; Dweck 2006; Meece et al. 2006; Pintrich 2003). Achievement goal theory speaks to how learners approach tasks and receive feedback in the learning process. Some learners—those with a “growth” orientation—experience failure or mistakes as inherent parts of learning, as temporary setbacks that can be overcome with persistence (Blackwell et al. 2007; Dweck 2006; Pintrich 2003). Others—those with a “performance” orientation—experience feedback as verification that they are inherently “smart/not smart” or “good/not good” at a certain task. Thus, a child with a performance orientation might experience a “D” on a science test as evidence that she is “bad at science” and withdraw from subsequent challenging experiences, whereas a child with a “growth” orientation would experience such feedback as a routine part of the learning process useful in helping reshape learning strategies. Achievement goal theory suggested to us that similar feedback, offered by the same teacher, may result in deeper engagement in learning and increased motivation for student A (growth orientation), but task avoidance and reduced motivation in student B (performance orientation) (Dweck 2006; Pintrich 2003). That is, two SIDU processes might look very similar on the surface (data binders, data walls, tracking data) but play out very differently depending on the learning orientations of the students and the framing of the process by teachers.

Research suggests that these goal orientations are themselves malleable—that learners can be prompted to shift towards a “growth” orientation if teachers structure feedback in ways that focus on improvement- and effort-oriented language, rather than in ways that highlight competition and peer comparisons (Blackwell et al. 2007; Cauley and McMillan 2010; Dweck 2006). Therefore, this perspective suggests that educators who are intentional about using growth-oriented language and prompting in the classroom could help students engage in the use of personal data in constructive ways. Unfortunately, it also suggests that educators who are haphazard about how they structure SIDU may unwittingly reify negative self-perceptions among students who approach learning steeped in a performance orientation. This perspective leads us to believe the stakes are high in learning more about when and under what conditions SIDU can best benefit student learning and development.

SIDU: In the classroom and throughout the school

Although the literature and our scan of district websites suggest a link between the language of continuous improvement and SIDU, the picture of what teachers actually do when they involve students with data use tends to be murky. We therefore examined practitioner literature as well as research to learn more about what the process of SIDU is intended to entail, and how the process actually unfolds.

SIDU in practice: Common features

To better understand how SIDU is intended to unfold in practice, we examined district and social media websites, practitioner-authored reports, and training materials. As we read each piece, we made note of the main features, then compared these to the features noted in the next pieces, until a consistent constellation of features emerged. Though practices may vary slightly from context to context, this constellation tended to include: (1) a structured, nested inquiry cycle—usually the Plan-Do-Study-Act cycle—guided by specific, measurable goals; (2) the use of student data folders/binders; (3) a focus on guided reflection specific to learning strategies; (4) semi-public displays of data; and (5) data-informed parental involvement.

Plan-Do-Study-Act

Teachers engaging in SIDU are tasked both with teaching the Plan-Do-Study-Act (PDSA) cycle (a variant of action research) and with modeling this cycle in how they manage their own classrooms (HEBISD 2011; JSA 2012a, b; Miller et al. 2009). They are expected to work with students to set classroom-level goals related to learning standards and to subsequently engage in instructional planning that provides pathways to student mastery. “Quick checks” and benchmarking processes are aimed at monitoring the progress of individual students and groups, and teachers are to analyze resultant data for trends that inform adjustments to future instruction (JSA 2012a, b). While teachers carry out this PDSA cycle themselves, they are also to facilitate a similar, nested process with students. That is, they help students write individual learning goals related to the collaboratively established classroom goals. These goals are typically referred to as “SMART” goals—i.e., goals that are “Specific, Measurable, Aligned to learning requirements, Results-focused, and Time-framed” (Dunlap Community Unit Schools, DCUS 2013; JSA 2012a).

Student data folders

SMART goals are subsequently tracked, with data being maintained in student “data folders” or “data binders.” Teachers are expected to facilitate the construction, maintenance, and use of these data folders, which are promoted as tools to “…support students in becoming co-producers of their learning” (MCPS 2010, n.p.). After students establish personal learning goals and engage in learning activities, teachers help them create data folders in which the students construct graphs and charts detailing progress toward those goals (DCUS 2013; JSA 2012a, b; Martin 2011; Rhim 2011). Data folders become a critical component of SIDU, as students maintain evidence of progress toward learning goals across the instructional year. Information in data folders is shared with school leaders and with parents of students during periodic conferences (Miller et al. 2009). The use of data folders was clearly a focus of teachers who used Pinterest to share ideas about classroom-level continuous improvement: items “pinned” onto virtual bulletin boards frequently included photos illustrating ways teachers had students track their own data (see Courtney 2014).

Guided reflection

After helping students graph their own data, teachers are expected to guide them through a reflective process (sometimes referred to as a “plus-delta” process) in which students identify strengths and weaknesses in the targeted area and determine which learning strategies worked well and which need to be adjusted in future efforts (JSA 2012a, b; MCPS 2010). This process is aimed not only at supporting present learning, but also at helping students develop lifelong patterns of goal-setting, planning for desired outcomes, and making adjustments based on thoughtful reflection on progress (or lack thereof).

Semi-public displays of data

Teachers engaging in SIDU are expected not only to help students monitor their own data, but also to create semi-public (e.g., classroom or hallway) displays of data that evidence progress toward goals (DCUS 2013; HEBISD 2011; Martin 2011). Such displays are intended to be both informative, providing normative data to students and others in the school community, and motivational, introducing elements of competition amongst classes or groups of students. These public displays—often referred to as “data walls,” “data rooms,” or “data dashboards” in a non-technological sense—are considered mechanisms for monitoring progress and informing parents and school visitors as to the state of instruction at a campus (Miller et al. 2009; Rhim 2011). Much like the data folders, this was an area of focus for those who exchanged ideas vis-à-vis Pinterest: photos of classroom displays of data, hallway bulletin boards, or charts tracking whole-class or student progress by group were abundant.

Data-informed parental involvement

A last feature common throughout practitioner literature is the intent for SIDU to enable parent–teacher–student communication that allows students to take a visible role in explaining the student data folder to their parents (DCUS 2013; MCPS 2010). This puts students in a leading role in terms of describing learning activities, strategies, and outcomes. According to practitioners and district websites, it also affords students “ownership” of their data and learning (DCUS 2013; Martin 2011; MCPS 2010).

Practice through the lens of research: What’s the evidence on SIDU?

Evidence of practitioners engaging in SIDU was plentiful. Taking into account the volume of web posts, Pinterest posts and exchanges, training materials, and writing by practitioners for practitioners, it appears that for many school districts, SIDU has become a standard component of continuous improvement practices. In stark contrast, there exists a paucity of research on how these practices actually unfold. In fact, when we excluded research on the use of self-monitoring practices among students with special needs, we found few studies specific to SIDU. The literature that does exist falls into one of two broad categories: (1) research focused on educational data use broadly, but which mentions SIDU tangentially; and (2) descriptive research that outlines how SIDU unfolds in particular contexts.

Research related to SIDU

The more plentiful body of research is that which seeks to describe educational data use broadly, and in doing so mentions SIDU as an associated—but not central—theme. These studies set out neither to examine how SIDU unfolds in practice, nor to explore nuances of how SIDU is implemented, but they do provide some insight into the practice. The most wide-reaching of these studies is likely the Institute of Education Sciences practice guide for educational data use (Hamilton et al. 2009), which assesses multiple aspects of DDDM through the lens of the What Works Clearinghouse (WWC) standards. Upon examining the evidence related to DDDM for practice, the authors recommended that practitioners “Teach students to examine their own data and set learning goals,” though they acknowledged that, according to the rigorous standards of the WWC, the extant evidence for this recommendation was low (Hamilton et al. 2009, p. 8).

Despite limited research on SIDU, the panel posited that if students were involved in setting learning goals, “Teachers can then use these goals to better understand factors that may motivate student performance and can adjust their instruction accordingly” (Hamilton et al. 2009, p. 8). However, the evidence used in support of this recommendation drew heavily from studies on classroom-based formative assessment (e.g., Black and Wiliam 1998; Clymer and Wiliam 2007; Hattie and Timperley 2007). As we have noted, robust formative assessment processes and SIDU share similarities but are not identical processes. The timely content- and process-specific feedback provided to students as described by formative assessment researchers (see Black and Wiliam 1998; Clymer and Wiliam 2007; Wiliam 2011) seems quite removed from some descriptions of SIDU in practice, especially when student learning goals are vague. For example, in one set of training materials, a student is described as charting progress toward the goal, “…to get a 90 % or better on the end-of-unit test” (JSA 2012b, p. 14). Formative assessment research may indeed prove a useful guide for implementation of SIDU, and a valuable framework for further examination of the practice, but it seems premature to use this particular body of literature as the main source of evidence supporting the practice of SIDU.

Research focused on SIDU

In most DDDM research, students are those who produce data, not those who analyze or act upon them (see Copland et al. 2009; Valli and Buese 2007). And, in a handful of other studies, we learn that students are involved in data use in some way (for example, through a description of methods that includes review of “student data folders”), but not how the students are involved, or what characterizes the conversations between teachers and students around those folders (see Banister 2002; Park et al. 2013; Stuart and Rinaldi 2011; Pollock 2013; Quebodeaux 2010). Therefore, the unfortunate reality is that while ample research exists on educational data use (e.g., Hamilton et al. 2009; Louis et al. 2010; Park and Datnow 2009; Ikemoto and Marsh 2007; Kerr et al. 2006; Mandinach 2012; Marsh 2012), this research has thus far not addressed the nuances of SIDU in any depth.

In fact, we could locate only two close examinations of SIDU. In the first, Kennedy and Datnow (2011) studied the ways in which students were meaningfully involved in reform efforts and, specifically, DDDM-oriented reform efforts, at eight schools considered exemplars of data use. The authors found evidence of student involvement in data use in some form in each of the schools. They further drew from study data to create a typology describing the depth and complexity of how students were involved in data use:

  • Tier I involvement included “dialogic student involvement in reform planning and implementation,” which the authors describe as the “most comprehensive type of involvement” (p. 1255). In this tier, the authors noted that students routinely provided feedback to teachers through course evaluations and through consulting-style conversations focused on instructional strategies.

  • Tier II was described as schools and districts using data “… to target student engagement within the context of the learning environment” and included assessments of student engagement. At some sites, students participated in “rounds” or “data walks” side-by-side with teachers and school leaders (p. 1257).

  • Tier III involvement focused on “encouraging students to respond to data by becoming more engaged in their own learning and achievement” and was observed at seven of the eight study sites. Students were asked to reflect on personal data and to come up with plans for improvement.

Tier III practices most closely resemble those described in practitioner-oriented literature on SIDU, and provide further evidence that “data chats” and engaging in goal-setting, reflection, and self-monitoring of data are increasingly common practices in schools.

The second study (Marsh et al. 2014) focused on how middle school teachers engaged students with data use; in this study, the authors examined the phenomenon of SIDU through a lens of achievement goal theory, and assumed a more questioning—rather than solely descriptive—stance. For instance, findings indicated that many teachers reported using “data folders” and public displays of data because they thought doing so would motivate students to improve by catalyzing increased effort. While this is not an explicitly stated goal for involving students in most practitioner literature, it does seem to fit with the assertion that student involvement helps students “own their own learning” (JSA 2012b, p. 8). Marsh et al. noted:

Teachers and administrators believed if students saw their data, then they would work hard, take seriously the assessments, and invest in their own learning. Although implicitly educators may have believed the added motivation would lead to learning, they first and foremost emphasized the effects of data use on student motives (2014, p. 12).

The authors also noted that one of the teachers who asserted that examining data (via data folders or in semi-public postings on data walls) would catalyze greater effort on the part of students also reported being “embarrassed” and “stressed” when her (teaching-related) data were posted in view of other employees as a part of a discussion.

The Marsh et al. (2014) study is valuable to the field for a number of reasons. First, it provides more data on exactly what teachers and school leaders are doing specific to student-involved data use—as opposed to what training or recommendations suggest teachers and school leaders do. The piece provides a “reality check” that helps tell the story of how SIDU unfolds in different contexts. Second, approaching the issue from a perspective informed by achievement goal theory allowed the researchers to classify their observations of student involvement in practice: over two-thirds of the examples captured were determined to promote performance—not mastery—goal orientations. Viewed through the lens of achievement goal theory, this suggests that the ways in which SIDU was implemented may well have reified negative self-images as learners for some students, or caused them to push back from challenges, rather than ramp up effort. That is, the ways in which SIDU actually unfolded in practice may have had the opposite of the intended effect for students who came to the tasks with a performance, rather than mastery, orientation.

Third, the study brings to light the positive potential for SIDU when implemented in alignment with what the field already knows about promoting a mastery orientation: that is, when data are personal, measured in comparison to standards (as opposed to other students), and when effort and improvement are valued as much as the score on the summative exam, students can not only move toward desired outcomes but may also reap the benefits of increased internal motivation. Finally, the authors suggest that where structural supports were in place to facilitate constructive data use among adults (e.g., data coaches, literacy coaches, or professional learning communities), more mastery-oriented approaches to SIDU were evidenced.

Conclusions: Heavy on “Data,” light on “Evidence”

In sum, teachers appear to be increasingly expected to engage students in tracking and analyzing data, and to apply within their classrooms a DDDM-oriented process that has traditionally been applied to organizations and to adults. Although training materials and practitioner writings outline a process for involving students with data, the paucity of research on this phenomenon means that we know relatively little about how this process unfolds in everyday practice. What we can take from the research at present is that the process may unfold in ways that promote either a mastery or a performance orientation among students. Descriptions of the process suggest that some teachers engage students in ways that promote mastery; however, others involve students in ways that send mixed signals or that promote a performance-oriented approach to learning, which can de-motivate some students (Marsh et al. 2014).

Implications: Bridging research to practice, and practice to research

Reflecting on the ways in which the literature suggests SIDU has evolved and is playing out in schools, we see several implications for bridging research and practice. Here, we describe the path forward in four stages: (1) taking stock of the current practice-research disconnect; (2) understanding the past and acknowledging the present; (3) attending to theories-of-action; and (4) moving forward through improved collaboration.

#1: Practice is ahead of research: For worse or better

That districts across the United States are engaging trainers and consultants specific to SIDU, and are sharing information on the practice and importance of “classroom data walls” and “student data folders” on websites and social media, suggests that SIDU is becoming a fairly common practice. From the multiplicity of internet hits for these terms and the numerous social media pages devoted to sharing strategies around SIDU, we conclude that SIDU is growing in popularity as a school improvement strategy, albeit one that has gone largely unexamined in its present iteration. This highlights a chasm between the worlds of research and practice.

Our stance is that, ideally, early childhood, primary, and secondary settings and academia ought to work in symbiotic fashion toward improvement in educational opportunity for all children. In the world of practice, educators should engage in inquiry-based continuous improvement efforts whereby they examine best practices and research, try out new strategies, measure results, and make adjustments toward improvement. In the ideal world, practitioners share this knowledge with researchers and with other practitioners, so that constructive strategies can “bubble up” and be dispersed throughout the field. And, those in academia ought to do what few practitioners have time to do: step back from the everyday demands of working with buildings full of children and the complexities that come with them to engage in deep, meaningful research on particular aspects of practice. Academia needs practitioners to act—thoughtfully and in light of evidence—and practitioners need academia to provide the kinds of in-depth study and evaluation of strategies, in-school and out-of-school factors, and associated issues that influence the ways in which educational opportunity unfolds for children in varying contexts.

In the case of SIDU, reality seems far from this ideal. Practitioners have taken the sound body of evidence on formative assessment practices, paired it with the evidence on continuous improvement (both within and outside of education), and determined that the intersection of these practices provides a useful approach to working with students. They have shared ideas, published work (mainly outside of the peer-review process), and have greatly contributed to the spread of the ideas and work in a relatively short span of time. However, this body of practitioner-led work seems to have overlooked the ways in which feedback can affect students differentially depending on the students’ mindsets regarding achievement. There is little evidence that practitioners have engaged in systematic and in-depth evaluation of such practices, or have shared these evaluations across district lines. And, there is little evidence that practitioner-leaders have examined the ways in which teachers implement the training they receive on involving students in data use, or how that implementation aligns with the broader intent of the training. If this work has been done, it has not been shared in ways accessible to the broader field.

The response of research has been similarly insufficient. The field has amply and ably described best practices with regard to leading for data use (Hamilton et al. 2009; Ikemoto and Marsh 2007; Mandinach and Jackson 2012; Wayman et al. 2012). The field has done a thorough job of examining systemic and central office-level issues related to effective data use (Coburn et al. 2009; Park and Datnow 2009; Marsh 2012; Wahlstrom et al. 2010; Wayman and Jimerson 2014). The field has attended to what teachers do when they analyze their own data and what challenges teachers encounter when they try to engage in data-informed practice (Jimerson and McGhee 2013; Mandinach 2012; Means et al. 2011; Valli and Buese 2007; Wayman and Stringfield 2006). The field has even worked to describe what can go wrong when data use is not approached in constructive ways (Booher-Jennings 2005; Daly 2009; Louis et al. 2010).

What researchers have not done is purposefully examine how formal data use intersects with the everyday learning experiences of students. Research has not provided knowledge regarding the nuances of SIDU in practice and the factors that may ensure a constructive, healthy process for teachers and students. When research trails practice to the degree that it does with SIDU, practitioners have little recourse other than to act on the information that has been introduced into the trade literature, even if that information has been neither generated through systematic investigation nor subjected to rigorous peer review.

#2: Understanding the past, acknowledging the present

Understanding the history and rise in popularity of SIDU is important for two reasons. First, practitioners and researchers should understand that while these practices could have arisen from sharing of the student-centered practices of special educators, or from the application of classroom-based formative assessment practices, they more likely have “trickled down” from the accountability-centered practices of business, industry, and educational policymaking (see Marsh et al. 2014). This is not to say that these practices cannot and are not being used to benefit individual students; from the little research that exists, it seems likely that in some cases, this is exactly what is happening as teachers work in concert with students of all ages to set standards-based learning goals, establish plans of action to meet those goals, and track progress (Kennedy and Datnow 2011; Park et al. 2013). However, we should recognize that the world of practice has largely overlaid an approach developed for the improvement of systems onto the improvement of individuals (sometimes very young individuals), and this may require examination of how SIDU plays out in terms of short- and long-term effects on students.

Second, recognizing the roots of SIDU may help teachers be thoughtful about how they implement data use with children. Daly (2009) suggests that when educators become overwhelmingly preoccupied with the “bottom line” of accountability policy demands, they tend to be less instructionally innovative and more rigid in their planning and assessment activities; ironically, this “rigid response” to the demands (and potential sanctions) of high-stakes testing can derail long-term improvement. Early studies of SIDU (Kennedy and Datnow 2011; Marsh et al. 2014; Park et al. 2013) suggest that the language of well-intended classroom continuous improvement structures may fall victim to “accountability creep”—that is, robust learning goals may give way to test-centric goals. Thus, as teachers grow in their understanding of SIDU as a nuanced process, they may be able to reject the methods of use that they themselves find unproductive in favor of more constructive, progress-oriented strategies.

#3: Attending to theories of action

As research develops around these nuances of SIDU, one avenue we think particularly critical is that which examines educators’ underlying theories of action around the practice. We need to know more about why educators engage in this practice and what they think happens within the process to catalyze improved learning. And, these theories of action must subsequently be held up to the light of research to test their underlying assumptions.

For example, we noted that our own approach to this work was rooted in our interest in achievement goal theory. From this perspective, our examination of the literature highlighted a particular concern—that well-intended professionals operating from different theories of action might engage in “data chats” with two students and encourage further effort and participation on the part of one yet unwittingly discourage risk-taking and learning in the other. The work of Dweck and colleagues (Blackwell et al. 2007; Dweck 2006) suggests that how feedback is provided is a nuanced but critical factor in how a student receives and internalizes that feedback when deciding how to further engage (or disengage) in the learning process.

Figure 1 illustrates our thinking around how a teacher’s theory of action might affect the types of feedback or verbal prompting offered to a student. The “feedback” component of this model reflects the belief that robust learning requires the provision of timely, constructive feedback. This line of thinking proceeds as follows: “We want students to learn. Learning requires ample and actionable feedback. This process provides structures to make such feedback a consistent part of the instructional process.” This theory of action is well-supported by decades of research on the power of formative assessment practices (Black and Wiliam 1998; Cauley and McMillan 2010; Clymer and Wiliam 2007; Halverson 2010; Hattie and Timperley 2007; Heritage 2007; Shute 2008; Stiggins 2005).

Fig. 1

Theories-of-action for student-involved data use. This figure illustrates how apparently similar theories-of-action underlying SIDU may diverge, according to achievement goal theory

The next level of the model, however, reflects a divergence in how educators seem to think this feedback catalyzes change relative to how students assimilate feedback and subsequently approach learning tasks. The left side of the figure reflects a theory of action suggesting that feedback catalyzes change by providing information (process or content) that assists students in making incremental improvement toward a standard. These teachers aim to grow capacity among students. They likely eschew the promotion of normative competitions and instead focus on individual learning goals and individual growth: the child who starts the year as the slowest reader, and who ends the year as the slowest reader, but who has increased fluency by 20 words per minute, still has much to celebrate in these classrooms. It is, essentially, a reflection of an educator who approaches SIDU with the intent to promote a mastery orientation.

The right side of the figure reflects the theory of action for SIDU among educators who believe that feedback catalyzes change by activating the desire to win or to be recognized. These teachers may aim to activate existing capacity within students—the students simply need to “try harder” or “focus.” These educators believe that knowledge of how students are performing in relation to others or to other classes will catalyze greater effort. The child who starts the year as the slowest reader, and ends the year as the slowest reader—even if her fluency has increased—has less to celebrate in this case, as her satisfaction in improvement may be dampened by the realization that she is still in “last place,” normatively speaking. In sum, this lens suggests that if teachers are not thoughtful about their respective approaches to SIDU, then many will likely take the route of some of the educators noted in Marsh et al. (2014) and engage in highly performance-oriented activities—strategies that other researchers (e.g., Meece et al. 2006; Pintrich 2003) suggest may be counterproductive for long-term learning habits.

We offer this as one example of how examining the nuances of SIDU implementation from theoretical foundations may illumine the promises offered by constructive SIDU practices as well as the potential pitfalls of poor or hasty implementation. As research on SIDU proliferates, it is our hope that more models will connect theory to practice in ways that flesh out multiple areas of concern or promise for the practice, which will result in more consistent and constructive practice across schooling contexts.

#4: Improved knowledge requires improved collaboration

The question, then, becomes, “Where do we go from here? If we are interested in improving learning experiences for all children, how might we proceed?” We suggest a critical step involves collaboration among multiple parties to close the research-practice gap in the area of SIDU. We are not asserting that involving students in data use is a destructive or unproductive process—quite the opposite: we think the practice holds great potential as a communication strategy to better include parents in matters related to their children’s progress, as a model for helping students establish positive goal-setting and strategic planning habits, and as a tool to facilitate substantive reflection among teachers and students alike. But we also think these promises will remain unfulfilled in many instances if implementation of SIDU is haphazard.

One level of such collaborative inquiry must include early childhood, primary, and secondary practitioners (teachers as well as campus and system leaders). Researchers need to observe SIDU practices in action—from training to implementation to follow-up. They need to observe teacher–student dyads, and teacher–parent–student conferences to learn more about how these practices unfold on a day-to-day basis in classrooms. The field is ripe for studies that examine academic outcomes alongside other important outcomes: does involving students in the tracking and analysis of their own data promote students’ enjoyment of a subject? Do students who engage in data use develop greater perceptions of self-efficacy? Do the planning and decision-making processes modeled in “student data folders” translate well to other areas of students’ lives? What do parents expect or get out of such processes, and are there ways to support parental involvement in data-rich conferences? Such inquiry could also include major providers of training around the country; the field has much to learn from such providers, and materials and training may also be improved through such collaboration.

A second level of collaboration around SIDU should extend the inquiry beyond the area of DDDM or educational leadership and into multiple aspects of teaching and learning. Involving students with data use is a multifaceted problem. While scholars have particular research agendas and areas of expertise, multiple and varied perspectives are needed to inform some of the questions surrounding student-involved data use—questions that need to be addressed for SIDU to be implemented with the greatest potential to improve the lives of students. Table 2 illustrates the diversity of potential inquiry related to SIDU that would be possible if scholars worked to transcend departments or research silos.

Table 2 Beyond DDDM: potential avenues of inquiry

Conclusion

“Student data folders,” “data walls,” and “data chats” are increasingly common aspects of practice among early childhood, primary, and secondary educators. Involving students in the tracking and analysis of their own data, and in establishing personal learning goals and reflecting on habits of work, has become a cornerstone of continuous improvement efforts. Yet we know relatively little about this practice and how nuances of implementation may affect a variety of student outcomes. When we examine practitioner-oriented literature and research through the lens of achievement goal theory, we are both hopeful and concerned. We are hopeful because teachers who are careful to adhere to the tenets of promoting a growth mindset may well help students establish patterns of goal-setting, hard work, and personal improvement in many aspects of their lives for years to come.

We are also concerned that shallow or haphazard implementation of SIDU, or implementation without a rich understanding of how goal orientation shapes the ways students experience feedback, may unwittingly facilitate very different outcomes. Students with a growth orientation may indeed welcome the opportunity to track personal data and may be able to use this information to support further learning efforts and, as a result, build competence and improve academic outcomes. However, children who begin the data use process with a performance orientation may experience such tracking of personal data as evidence of an inherent lack of intelligence, talent, or skill; for these children, the experience may reify the premise that they are simply inept in a particular area. What can spur one child on to greater effort and outcomes may present an unnecessary barrier to learning for another.

Because of these concerns, and because SIDU has been under-examined by the research community, we believe it essential to examine current practice to better understand when and under what conditions student-involved data use can be constructively employed in schools. Research specific to these issues is still evolving. Meanwhile, SIDU continues to flourish in the world of practice, with or without a rigorous evidence base; it is therefore critical for scholars to engage in research that helps build a foundation for improved teaching and learning for all students.