1 Introduction and Description of the Emerging Technology

Academic research arising from the relatively new field of Learning Analytics is growing rapidly but, like much of the research on emerging digital technologies, it is not keeping pace with the commercial products aimed at leveraging the data produced by educational systems. This paper looks specifically at student-facing dashboards, which are just entering the higher education marketplace. They are being integrated into existing applications for teaching and learning, such as Learning Management Systems, as well as into new applications for personalizing learning, such as gameful approaches to pedagogy (e.g., Gradecraft, see Aguilar et al. 2015) and tailored messaging systems (e.g., eCoach, see Huberth et al. 2015). Dashboards emerged as one of the first direct applications of Learning Analytics (Arnold 2010; Duval 2011), but early designs were aimed at academic professionals (advisors and instructors), who are expected to have the necessary training to understand these displays and the relevant experience to know what to do with the information they convey. Increasingly, however, the intended users are students (Wise 2014). This paper explores the potential and key challenges of providing university students with performance feedback via dashboards.

1.1 Learning Analytics Dashboards

An analytics dashboard provides a visual display of the important information needed to achieve one or more goals, consolidated and arranged on a single screen so the information can be monitored at a glance. Originating from the analytics work driving Business Intelligence (BI) and Executive Information Systems (EIS), Learning Analytics dashboards have been developed with the intent of revealing otherwise undetectable patterns in the educational data generated by instructional technologies, primarily Learning Management Systems (LMS). The sources of student data generated by online learning platforms have opened up new avenues for understanding student behaviors and providing feedback about those behaviors (Young 2016).

Using transactional data from LMSs, early work on dashboards focused on improving overall student outcomes (e.g., retention, persistence) by developing models that help users identify when specific students are at risk and in need of academic intervention. Purdue’s Course Signals (see Arnold and Pistilli 2012) was one of the first widely deployed systems to develop a “risk algorithm,” utilizing student factors including pre-college preparation, college performance, and LMS activity to categorize students’ risk for academic difficulty on a course-by-course basis. Called “Early Warning Systems” (EWS) or “Early Alert Systems” (EAS), these dashboards have been designed for use primarily by course instructors (e.g., Course Signals, see Arnold and Pistilli 2012; SNAPP, see Dawson et al. 2010) and academic advisors (e.g., Student Explorer, see Krumm et al. 2014), who are provided with current and historical information about students’ academic progress and course engagement. The algorithms underlying these systems are based on LMS variables, typically the course grades embedded in the gradebook and simple counts of login activity throughout a term (for examples, see Figure 1 in Krumm et al. 2014 and Figure 4.3 in Pardo and Dawson 2015).
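To make the shape of such a risk algorithm concrete, the sketch below flags a student's course-level risk from the kinds of LMS variables named above (gradebook grades, login counts) plus a preparation index. All thresholds, field names, and the traffic-light cut-offs are hypothetical illustrations; this is a minimal sketch of the general pattern, not the actual Course Signals model or its weights.

```python
# Minimal sketch of an EWS-style risk check. Thresholds and fields are
# hypothetical; this is NOT the actual Course Signals algorithm.
from dataclasses import dataclass

@dataclass
class StudentRecord:
    gradebook_pct: float    # current course grade, 0-100
    logins_this_term: int   # count of LMS login events this term
    prep_score: float       # pre-college preparation index, 0-1

def risk_category(s: StudentRecord, median_logins: int) -> str:
    """Classify one student in one course as 'green', 'yellow', or 'red',
    mirroring the traffic-light signals used by early warning dashboards."""
    flags = 0
    if s.gradebook_pct < 70:                      # struggling in the gradebook
        flags += 1
    if s.logins_this_term < 0.5 * median_logins:  # low engagement vs. the class
        flags += 1
    if s.prep_score < 0.4:                        # weak pre-college preparation
        flags += 1
    return {0: "green", 1: "yellow"}.get(flags, "red")

print(risk_category(StudentRecord(64.0, 25, 0.8), median_logins=40))  # -> yellow
```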

Dashboard algorithms are generated and validated by applying data mining techniques to historical behavioral data and student outcomes, supporting reliable detection of the patterns that typify underperformance and, ultimately, course failure. Learning Analytics dashboards typically display each individual student’s standing relative to class averages, coded for quick identification of the students in most pressing need of intervention. When the intended user is an instructor or advisor, sense-making is guided by multiple factors, including familiarity with the students, the courses, and even individual instructors, which allows these professionals to recommend specific actions to students.
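As a rough illustration of the relative-standing display just described, the sketch below codes one student's score against the class distribution. The cut-offs and labels are illustrative assumptions, not a validated model from any particular system.

```python
# Minimal sketch of coding a student's standing relative to the class
# average for quick identification. Cut-offs are illustrative only.
from statistics import mean, stdev

def standing(score: float, class_scores: list[float]) -> str:
    """Return a coarse code for one student's position relative to the
    class average (not a validated risk model)."""
    mu = mean(class_scores)
    sigma = stdev(class_scores)
    z = (score - mu) / sigma if sigma else 0.0
    if z <= -1.0:
        return "well below average"   # most pressing need of intervention
    if z < 0.0:
        return "below average"
    return "at or above average"

scores = [55, 62, 70, 74, 78, 81, 85, 90]
print(standing(58, scores))  # -> well below average
```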

2 Relevance for Learning, Instruction, and Assessment

Learning Analytics dashboards are rapidly moving into student-facing applications, where performance data is provided directly to the students. Many educational technologies provide students with some form of feedback, from the level of individual problems (outcome feedback) to general progress indicators (Bodily and Verbert 2017). Highly instrumented cognitive tutors, commercially available web-based practice and online homework systems, and Khan Academy are all well-known instances of systems where students receive personalized feedback. Dashboards are a different form of personalized feedback in two key respects. First, the information is highly visual, providing an abstract representation of performance that students must make sense of, use, and connect to specific actions. As meta-cognitive tools (Durall and Gros 2014), student-facing dashboards are expected to lead to improved performance by supporting awareness, self-reflection, and sense-making (Verbert et al. 2013). Although the field of information visualization has demonstrated that data displays can be a tool to support sense-making (Tufte 1990), Khan and Pardo (2016) warn that in an educational context, “Using data to support decision making processes is complex and may be more effective when mediated by an expert” (p. 249). Advisors and faculty are expected to have the necessary experience to interpret data, but university students’ data literacy and data-driven decision making have not been investigated with respect to understanding dashboard representations and connecting the information displayed to needed actions.

Second, student-facing dashboards typically provide visualizations of the students’ own activity with online tools (overall LMS activity or use of specific tools) and performance (grades) in comparison to their peers (see Figs. 1 and 2 for views of two current dashboards containing comparison information), a design feature that could be informed by the literature on social comparison theory (Festinger 1954) and related work on motivation (Suls and Wills 1991) and self-regulation (Butler and Winne 1995). The use of “leaderboards” as a design element in educational games provides one source for understanding the potential impact of direct comparison between peers’ academic performance. Leaderboards are visual displays that rank students by name according to their performance, providing them with a way to compare themselves directly with their classmates. Although aimed at promoting competition and status acquisition (Blohm and Leimeister 2013), the research on leaderboards to date suggests some caution is advised in their use. Because leaderboards are typically implemented as one of several gameful elements, it is difficult to disentangle their impact from other aspects of a gamified system. However, Domínguez et al. (2013) implemented a gamification plug-in for the Blackboard LMS and found that students did not enjoy the competitive elements, especially the leaderboard. Research conducted by Fox and colleagues (Christy and Fox 2014; Hanus and Fox 2015) has shown that the use of leaderboards can have unanticipated consequences for motivation that negatively influence academic performance. Specifically, Hanus and Fox (2015) concluded, “our combination of leaderboards, badges, and competition mechanics do not improve educational outcomes and at worst can harm motivation, satisfaction, and empowerment” (p. 159). Dashboards differ from leaderboards in that their social comparison presents the individual student against aggregated, and therefore anonymous, peers. However, it is not clear whether this leads to a more globalized self-assessment (positive or negative) rather than allowing students to interpret their position relative to information known about specific classmates (e.g., “I know that person is smarter than me”).

Fig. 1 Student-facing dashboard from Blackboard

Fig. 2 Student-facing dashboard from eCoach

3 Emerging Technology in Practice

Because analytics-based student-facing dashboards are just beginning to be used at scale, few studies have systematically evaluated their effectiveness or direct impact in higher education. In addition to Arnold and Pistilli’s (2012) work on Course Signals, Fritz (2011) conducted early research on the wide-scale deployment of a tool for displaying performance information directly to university students. The Check My Activity (CMA) tool allowed students to compare their own LMS activity and grades against a summary of their course peers. Fritz reports that “students value (and obsessively check) the grade book far more than any other function” (p. 92). Two studies by Corrin and de Barba (2014, 2015) showed that students are able to articulate and interpret feedback presented through a dashboard, but also found evidence casting doubt on students’ ability to connect that feedback to their current learning strategies. Using a system called Data2U, Khan and Pardo (2016) characterized four types of student dashboard use, providing insight into when different students consult the dashboard relative to their study behavior. However, they found no statistically significant relationship between use of the dashboard and academic performance.

A recent study by Reimers and Neovesky (2015) showed that while students would like more information about course deadlines and other organizational supports, there was no clear consensus about whether notifications about learning activities would be motivating, and many students disagreed or totally disagreed with the statement, “I would like to compare my performance with my fellow students.” The issue of student motivation was specifically addressed by Lonn et al. (2014), who found that students’ exposure to comparative information about their course performance, when viewed on an academic advisor’s dashboard, may have contributed to an accelerated decline in mastery orientation. In a follow-up study, Aguilar (2016) provided performance dashboards to students considered to be “at risk” by their university program and found that these students were sensitive to comparative information, although they indicated that they would seek it out if it were offered in an LMS. In a recent project (Teasley et al. 2017), we examined how students’ prior academic performance affected how they reacted to comparative information shown in a dashboard. We looked specifically at whether the content of the feedback (above vs. below class average performance) and its consistency with students’ overall academic achievement (existing grade point average) affected students’ responses to the dashboard. Our results showed that while most students found the dashboard visualizations informative, there were differences in students’ potential use of the dashboard and how they interpreted the impact of the information provided. Specifically, students receiving consistent feedback (High Performance Feedback/High GPA and Low Performance Feedback/Low GPA) found the dashboard visualizations to be more helpful for making decisions about how to obtain the grades they wanted than did students receiving inconsistent feedback (High Performance Feedback/Low GPA and Low Performance Feedback/High GPA).

4 Significant Challenges and Conclusions

While more research on student-facing dashboards is needed, designing systems with “one size fits all” displays, where all students’ performance is assessed by a single algorithm and feedback is presented in the same format, seems unwise. With the opportunity to combine LMS data with other data from student information systems, dashboards could instead provide personalized displays grounded in research that systematically examines how internal and external factors affect student motivation. For example, work on social comparison theory (Festinger 1954) has shown that comparisons with others who perform worse (downward comparisons) lead to feelings of superiority and positive affect (Major et al. 1991), while comparisons with others who perform better (upward comparisons) can evoke negative affect and lower academic self-concept (Dijkstra et al. 2008). Therefore, students who are underperforming relative to their past experience or current expectations in a specific course could be shown a dashboard that directs them to information about how to improve their grade rather than an upward comparison with peers. In gateway STEM courses, where most students experience a “grade penalty” relative to their existing GPA (Koester et al. 2016), it is an empirical question whether dashboards with comparison information should be made available to students at all. For example, providing comparative information in these classes may heighten the kind of stereotype threat that women and underrepresented minorities experience in gateway STEM courses (Spencer et al. 1999).
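A minimal sketch of this kind of display-selection logic follows, assuming hypothetical view names and an arbitrary five-point margin for the “grade penalty”; it illustrates one way such personalization could be wired up, not a validated design.

```python
# Hedged sketch of display selection for a personalized dashboard.
# View names and the five-point margin are hypothetical.
def choose_view(course_pct: float, class_avg_pct: float,
                prior_gpa_pct: float) -> str:
    """Pick a dashboard view for one student in one course."""
    below_self = course_pct < prior_gpa_pct - 5   # underperforming own record
    below_peers = course_pct < class_avg_pct
    if below_self:
        # Avoid an upward peer comparison; point to improvement resources.
        return "improvement-resources"
    if below_peers:
        return "progress-toward-goal"
    return "peer-comparison"

print(choose_view(course_pct=72, class_avg_pct=80, prior_gpa_pct=85))
# -> improvement-resources
```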

Further, there is a large body of literature examining the effects of feedback on student learning, and the results are highly variable (Hattie and Timperley 2007). The conditions and moderators of the effect of feedback on performance, which include intrapersonal factors (motivation, task focus, and goal orientation) as well as external factors (context, feedback timing, and frequency), are poorly understood, yet the assumption that feedback interventions are beneficial goes largely unchallenged. Winstone et al. (2016) caution, “Inevitably, the benefits of receiving feedback are not uniform across all circumstances, so it is imperative to understand how these gains can be maximized” (p. 1).

In summary, dashboards hold both promise and peril for motivating students. Given the rapid pace at which student-facing dashboards are being deployed in educational technology platforms, there is a pressing need to inform the design and application of these systems so they can fulfill their promise to support students’ awareness, self-reflection, and sense-making. Knowing who should see what information and when, how it might best be presented to individual students, and how to integrate dashboards into the larger pedagogical practices of higher education are important questions that call out for further research.