1 Introduction to Learning Analytics

Rising use of digital technology across education has heralded an increased focus on the potential of data to inform our understanding of student learning. Increasing attention to learning data has come from a number of sub-disciplines in education, most recently with the emergence of ‘Learning Analytics’, a field specifically focused on the use of data derived from student learning to inform that learning. As Ferguson’s (2012) “The State of Learning Analytics in 2012: A Review and Future Challenges” charts, the developmental trajectory of learning analytics has been driven by an interest in applying business intelligence techniques to the increasing amounts of data made available through virtual learning environments and other learning systems. This interest has shifted from primary concerns around accountability and efficiencies to an increasing focus on pedagogic concerns.

The field of learning analytics has seen keen interest in understanding how to effectively implement novel techniques that support learning. Across educational stakeholders there is an increasing desire to use data effectively to inform and understand practice; in particular, there is a strong desire to achieve impact by implementing and supporting effective learning strategies. Learning analytics can contribute to this, but to do so, we need to develop a deeper understanding of the kinds of problems it can tackle, and how to integrate analytics into practical pedagogic contexts.

There is a growing expectation that educators use data as a form of evidence of student learning (and course evaluation). This shift in focus onto learning analytics as a form of assessment data highlights two intertwined concerns. First, as others have noted, educators must have a degree of data literacy to be able to navigate the often quantitative information they are provided with (see, for example, Mandinach 2012; Mandinach and Gummer 2016). Second, this data literacy must have a focus on how data is used in context to make decisions. Such a perspective prompts a move beyond simply supporting educators in navigating large-scale assessment data or predictive models based on Learning Management System (LMS) log data, towards supporting educators in making practical decisions about the data they collect and act on.

This chapter focuses on a particular component of those challenges, suggesting that a key path to impact for learning analytics is a focus on ‘intelligence augmentation’ (IA) over artificial intelligence (AI). That is, rather than focusing on how artificial intelligence might be deployed to automate tasks, I shift focus to explore how artificial intelligence can augment assessment, in particular by amplifying the impact of high quality assessment with learning analytics derived feedback. Such an approach has dual benefits: it ‘brings along’ educators in the implementation of learning analytics, by supporting their existing practice rather than requiring wholesale changes, and it provides sites for learning analytics with clear pedagogic potential grounded in an understanding of that existing practice.

In this chapter, I introduce a broad model of assessment as a fundamental design context for educators. I then present three ways in which learning analytics might intersect with this model, arguing that ‘augmenting assessment with learning analytics’ provides for particular benefits. I will then illustrate this approach using a particular assessment context (peer assessment), and various sites for augmentation in that context, drawing on examples from the literature to do so.

1.1 Understanding Assessment as Design

In this chapter I frame assessment by building on the social constructivist model of assessment processes in Rust et al. (2005). I take it that when educators set about designing assessments, their fundamental question is “How do we get knowledge of what the students know?” (Committee on the Foundations of Assessment 2001). A key part of this assessment process is bringing student and educator expectations around what is to be learnt into alignment, with active engagement with criteria development and feedback from both educators and students (Rust et al. 2005).

In a simplified model based on this perspective (Fig. 10.1), educators across educational levels engage in assessment design as a component of their course design. In so doing, they design assessment tasks and criteria by which to assess the completion of those tasks (the top box in Fig. 10.1). In order to undertake this design work, educators should have a rich understanding of the learning context, alongside the assessment literacy to design and deploy tasks that probe learning. They must, therefore, understand how the assessment is related to the students’ learning, how assessments are constructed (by themselves or others) as measurement tools (e.g., validity, feedback, etc.), and know the range of assessment types and responses to them (Price et al. 2012). At the most basic level, educators might consider whether the assessment is summative in nature (sometimes called ‘assessment of learning’) – end of unit examinations, for example – or is intended to provide formative feedback towards further learning (sometimes called ‘assessment for learning’).

Fig. 10.1 A simplified assessment model, in which educators design assessment tasks and criteria, and students complete the tasks and receive a grade

Influencing this process are characteristics such as the educator’s epistemic cognition (how they think about the students’ knowledge and the influence of that on their instruction) (Barnes et al. 2017; Fives et al. 2017). That is, when educators create assessments they are making judgements about what they would like their students to attain, and must consider the question “what will I learn about my students from the formative assessment event?” (Fives et al. 2017, p. 3). Even in a simplified model such as this one, we hope that educators will also engage in a reflective process of using students’ outcomes on their assessments to (a) drive instruction, and (b) revise the assessment tasks for future iterations. That is, the assignments students submit are a learning opportunity for developing courses and modules, as well as for developing students’ individual learning.

As a part of this process of assessment – as reflected in the lower box in Fig. 10.1 – students must, of course, complete the assessment tasks and receive marks on them against the criteria. Again, here, in addition to content knowledge, students must exercise a degree of assessment literacy to understand what is being asked of them and how best to display this (Price et al. 2012). On receipt of their feedback (perhaps in written form alongside grades or rubrics), they should also engage in a reflective/metacognitive cycle, again influenced by individual differences such as achievement goals and epistemic cognition (Muis and Franco 2009; O’Donovan 2017).

Extending this simplified model, increased attention has been paid to the implementation of feedback and feedforward cycles in assessment processes. Feedforward is formative feedback that supports students in understanding the assessments they will complete and their criteria, through engagement with criteria (perhaps even writing their own), exemplars, and so on (Wimshurst and Manning 2013). Feedback, then, is formative feedback provided after the assessment activity to support students in understanding why they received the mark that they did, and how they might improve in the future. Thus, Fig. 10.2 indicates a cycle of assessment with feedback and feedforward, breaking down possible sites for augmentation with learning analytics.

Fig. 10.2 Assessment with feedforward and feedback: designing assessment tasks and criteria, feedforward formative assessment, completing the assessment task and receiving a grade, and feedback formative assessment

1.2 Models for Transforming Assessment with Learning Analytics

One way of conceiving of learning analytics and its role in teaching and learning is as a tool for assessment (Knight et al. 2014). In this model, through the analysis of process-based trace data and artefacts created in learning tasks, researchers and other stakeholders aim to make claims about that learning at any one of the four components above. For example, we might provide feedback on a written essay (artefact) or the writing process (process), and this feedback might come before or after the final submission, be provided to the student, or support the educator in evaluating and revising their assessment design. Within this understanding of learning analytics, we can further conceive of two broad approaches.

One possibility is for learning analytics to facilitate a shift away from the summative assessment of produced artefacts. Instead, learning analytics might facilitate more process-based assessments, such as choice-based assessments that explore the meaningful choices students make in completing tasks (Schwartz and Arena 2013), and the processes undertaken in ‘performance assessments’ of authentic tasks (Benjamin et al. 2009; Darling-Hammond and Adamson 2010; Linn et al. 1991; Pellegrino 2013; Stecher 2010). New forms of feedback can thus make use of the trace data in “data-rich” learning (Pardo 2017). However, work adopting this approach requires the collection of new types of data, potentially using new technologies to collect that data, which must then be represented back to learners and educators in ways that themselves require research. The development of assessments based on novel process-based data is challenging, as work on intelligent tutoring systems has shown.

Thus, this development is likely to be time-consuming, expensive, and to require systemic changes. As Roll and Wylie (2016) note in the context of the 25th anniversary of the International Journal of Artificial Intelligence in Education, in order to achieve impact in education, researchers should undertake two strands of research: evolutionary, with a focus on existing classroom practice; and revolutionary, with a focus on more wholesale change of systems. The former, evolutionary, approach looks at how learning analytics can augment existing systems. In contrast, the latter might involve the development of new types of assessment, and the automation of existing assessments to create structural change in systems. These three approaches are discussed below, with the rest of this chapter focusing on the potential of augmentation.

1.2.1 Approach 1: Learning Analytics for New Types of Assessment

First, learning analytics can be developed to assess constructs that were not readily assessable using traditional methods. This includes approaches such as intelligent tutoring systems, or choice-based and performance assessments, that follow procedures such as evidence centred design (Mislevy et al. 2012), which provides a logic for mapping behaviours to target constructs (such as self-regulation, collaboration, etc.). In this model (see Fig. 10.3), whole assessment structures can be redesigned, with criteria based on fine-grained analysis of the knowledge components in a domain, and feedback and task completion based around completing defined activities, with practice algorithmically oriented to the target constructs.

Fig. 10.3 New learning analytics based assessments, across the same cycle of task and criteria design, feedforward, task completion and marking, and feedback

1.2.2 Approach 2: Learning Analytics to Automate Assessments

Second, learning analytics can be used to automate existing assessment structures. Thus, rather than tutors or instructors engaging in grading, a system is developed to assess automatically, for example through automated essay scoring systems (Shermis and Burstein 2013). In this model (Fig. 10.4), substantial portions of the assessment structure may remain static, with analytics targeted either at the automation of existing work (e.g., automated essay scoring), or at scoring based on process data collected in the creation of artefacts aligned with existing assessments.

Fig. 10.4 Automating existing assessment and/or adding new process/product data, across the same assessment cycle

1.2.3 Approach 3: Learning Analytics to Augment Assessment

Lastly – and the focus of this chapter – learning analytics can be used to augment assessment: to support analysis of, and feedback on, the artefacts produced or the processes undertaken within well-established, effective e-assessment or technology enhanced assessment. That is, rather than developing new assessments or new methods of completing assessment, the focus is on augmenting existing assessment structures by enhancing their feedback and feedforward processes. In this model (Fig. 10.5), rather than seeking to automate grading (or enrich it through analysis of process data), the focus is on formative feedback and the potential of learning analytics in that space. As highlighted below, this approach has some distinctive benefits.

Fig. 10.5 Augmenting formative assessment (feedforward and feedback) within the same assessment cycle

This chapter argues that, to achieve maximum impact and adoption, educators and learning analytics researchers should work together to develop approaches to assessment augmented by learning analytics. Such augmentation has a dual effect: it increases adoption of learning analytics approaches, thus potentially opening the door to more revolutionary (rather than evolutionary) changes, while also increasing adoption of existing good practice through supporting that practice via learning analytics.

1.3 Benefits of Augmenting Assessment with Learning Analytics

Writing in the 25th anniversary issue of the International Journal of Artificial Intelligence in Education (IJAIED), Baker (2016) suggests that AI researchers working in the education space should shift focus. In that piece, Baker notes that adoption of AI technologies in education has not been widespread. He thus suggests that, instead of developing ever smarter intelligent tutoring systems, researchers might focus on amplifying human intelligence, supporting humans to make decisions through the use of intelligently designed systems. Such a proposal would, for example, shift attention away from auto-scoring systems and towards using the same data to report to educators and students in intelligent ways, to support them in making decisions.

Baker notes a number of potential advantages to such an approach, including that it provides for more flexibility of response and intervention. This is because such an approach does not necessitate the assumption that students in the future will behave in the same ways as those modelled now, nor that the interventions remain the same. That is, a risk of automating existing processes is that it reifies existing practice in both assessment and outcome, where this may not be appropriate. An augmented approach, then, does not necessarily require the large-scale design and implementation of pedagogic agents, which is an expensive and time-consuming process. Instead, the focus is on how technologies can be used flexibly to augment intelligence.

As Zhao et al. (2002) highlight, the distance of any innovation from existing culture, practice, and technological resources will affect its uptake by educators. That is, innovations will not be taken up if they run counter to the existing cultural context, do not align well with the practices of individual educators, or require significant change to available technology. In a similar vein, in their introduction to a panel discussion on overcoming barriers to the adoption of learning analytics, Ferguson et al. (2014) note the need to consider:

  • institutional context and culture;

  • buy-in from stakeholders; and

  • understanding of specificity and user needs.

Distinct advantages of focusing on the potential of learning analytics to augment assessment are that such an approach should: align better with existing culture and practice; be less likely to require significant technological change; and keep decision making firmly within the autonomy of the educator. As such, this approach is a way to support and enhance existing practice, raising awareness of the potential of learning analytics and, in turn, increasing the potential for impact from those analytics in existing and novel applications. Although over the longer term a focus on systems change – and the implementation of new assessment structures – is likely to be an important component of educational improvement, I suggest that this is more likely – not less – with a key focus on augmentation of assessment with learning analytics. As such, three key advantages of augmentation emerge:

  1. Augmentation provides for easier integration

  2. Augmentation is more flexible in use

  3. Augmentation has lower upfront cost.

In proposing this approach, I take an explicitly design-oriented perspective. As Lockyer et al. (2013) highlight, because learning analytics provides new methods for data capture (in place of self-report methods), it can help educators test their assumptions regarding their learning design. Learning analytics, then, offers the potential to deliver on the promise of technology enabled assessment for learning (Crisp et al. 2016; Dawson and Henderson 2017), with feedback to students identified as one key target for such analytics (Timmis et al. 2016). By augmenting assessments with learning analytics, educators and students can receive a richer picture of the learning that is taking place, and be supported in developing both the learning tasks and their responses to those tasks.

1.4 Augmenting Assessment with Learning Analytics: A Worked Example

In this model, then, emerging learning analytics approaches would be used to provide feedback to students and educators for their educational decision making. This feedback might be diagnostic or formative in nature, identifying specific areas of weakness and indicating how particular features of their learning might be changed. Such augmentation might also simply provide additional information to stakeholders that they could use to support their decision making. For example, topic modelling might indicate that students wrote about two themes – a piece of non-normative or evaluative information – which could be used to target specific feedback content.
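
As a minimal, hypothetical sketch of the kind of analysis that could supply such information (not a tool described in this chapter), standard topic modelling (here, latent Dirichlet allocation via scikit-learn) can surface the dominant themes in a set of student submissions, which an educator might then use to target feedback:

```python
# Illustrative sketch only: surface dominant themes in student submissions with
# topic modelling, so an educator can target feedback at those themes.
# The submissions and the two-topic choice are invented for the example.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

submissions = [
    "The experiment shows how enzyme activity varies with temperature and pH.",
    "Regulation and policy shape how laboratories report and share their findings.",
    # ... one string per student submission
]

vectoriser = CountVectorizer(stop_words="english")
doc_term = vectoriser.fit_transform(submissions)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(doc_term)  # per-student theme mixtures

terms = vectoriser.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Theme {topic_idx + 1}: {', '.join(top_terms)}")

# doc_topics[i] gives the theme mixture for student i's submission, which could
# be reported alongside the educator's existing feedback workflow.
```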

In the following sections, some key advantages of this approach are drawn out by exemplifying them in a particular set of design cases. These designs are based on a basic model of peer assessment, which may be used in both feedback and feedforward contexts. Each of these designs has in fact been implemented, as indicated by the citations throughout, although the combined set of design patterns has – to the best of my knowledge – not previously been brought together. In highlighting these designs, I wish to draw attention not only to the specific uses being developed, but also to the general approach to augmentation and its potential to improve the adoption and integration of data informed approaches in education.

1.4.1 Peer Assessment

In peer assessment models, students assess their peers’ work for formative or summative purposes, while their own work is assessed by those same peers. The benefits of peer assessment for learning are well documented (for example, Topping 1998), with increasing attention on the specific design configurations that support its effective implementation (Strijbos and Sluijsmans 2010). In addition, peer assessment can produce reliable grades and feedback (for example, Cho et al. 2006b), and has clear potential in e-learning contexts (Whitelock 2010).

However, amongst both students and educators, there is often a focus on the potential of peer assessment to reduce the staff marking burden, and on other structural concerns around the fairness, efficacy, and quality of peer assessment. Students can perceive peer assessment as inaccurate or unreliable, reducing their motivation to participate; and, indeed, students (like other assessors) do in fact disagree. To address these concerns, multiple designs can be adopted that augment a basic peer assessment design such as that shown in Design 10.1. These designs may be further augmented by learning analytics, as indicated in the following iterations.

Design 10.1: Peer Assessment

  • Problem: We want our students to develop their assessment literacy through applying the assessment criteria, and providing and receiving feedback on their work for formative purposes.

  • Task: Peer assessment involves students assessing each other’s work, typically prior to submitting a final version of the same work.

  • Tools/materials and participant structures: Peer assessment is typically conducted individually, often with students asked to assess multiple assignments (although this requirement may be removed for formative purposes). Assessment is typically anonymous, and involves provision of both a score and a comment.

  • Iterations and Augmentation: Peer assessment may be used as a stand-alone assessment design, or augmented by one of the designs below.

Developing the basic peer assessment design, we can add a number of complementary designs that extend peer assessment and may be augmented by learning analytics. For example, in Design 10.2, we see the addition of automated peer allocation. This simple technique – based on prior assessment data, automated essay scoring, or topic modelling – can be used to allocate peers in a way that supports all students’ learning. For example, prior research indicates that all students can benefit from feedback from lower-achieving students, but that lower-ability students tend to benefit more from students who are similar to them (Cho et al. 2006a). So, it may be desirable in a number of contexts to assign peer assessment artefacts on the basis of prior knowledge, the content of the artefact, or some other feature.

Design 10.2: Peer Allocation

  • Problem: Peers may be assigned texts that are not aligned with their content knowledge, or that fail to support their learning (because they are misaligned in terms of quality).

  • Task: This design complements the peer assessment design (Design 10.1). In peer assessment, students can be assigned to assess either convergent or divergent work (i.e., work that is of a similar quality/topic as their own, or diverges from their own).

  • Tools/materials and participant structures: As in peer assessment.

  • Iterations and Augmentation: Learning analytics can be used to allocate peers automatically based on content or ability, for example, using prior assessment data, automated essay scoring, or topic modelling (a minimal allocation sketch follows this design).
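
To make the allocation idea concrete, the following hypothetical sketch (the names, scores, and ‘convergent’/‘divergent’ modes are invented for illustration, and are not drawn from the cited studies) allocates reviewers from prior assessment scores; topic-model or essay-score features could be substituted without changing the structure:

```python
# Illustrative sketch: allocate each student a peer to review, using prior
# assessment scores as a proxy for ability. "Convergent" pairs similar-scoring
# students; "divergent" pairs students from opposite halves of the ranking.
def allocate_reviewers(prior_scores, mode="convergent"):
    """Return {student: peer_whose_work_they_review} based on prior scores."""
    ranked = sorted(prior_scores, key=prior_scores.get)  # low -> high
    if mode == "convergent":
        # Neighbours in the ranking review each other (wrapping to close the ring).
        return {s: ranked[(i + 1) % len(ranked)] for i, s in enumerate(ranked)}
    # Divergent: match each student with a peer offset by half the ranking.
    half = len(ranked) // 2
    rotated = ranked[half:] + ranked[:half]
    return dict(zip(ranked, rotated))

prior_scores = {"Ana": 55, "Ben": 62, "Cai": 71, "Dee": 78, "Eli": 90}
print(allocate_reviewers(prior_scores, mode="convergent"))
print(allocate_reviewers(prior_scores, mode="divergent"))
```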

In addition, Design 10.3 provides a complementary pattern that introduces a ‘calibration’ or benchmarking task, in which students assess exemplar texts prior to completing their own task. Such tasks have, as with peer assessment more generally, often been discussed in the context of training for peer assessment as a means to reduce the marking burden (for example, in calibrated peer review; see e.g., Balfour 2013). However, calibration tasks have a richer potential in engaging students in applying assessment criteria to known exemplars, with the ensuing potential for diagnostic feedback and discussion about their judgements. In peer assessment, it is often the giving (not the receiving) of feedback that students learn most from (Cho and MacArthur 2011; Cho and Cho 2011; Lundstrom and Baker 2009; Nicol et al. 2014). Similarly, in calibration tasks we would expect that, by engaging students in giving feedback, they will learn. As such, supporting students in calibrating their judgement and feedback should support their assessment of their own work, and address the concern that students’ feedback can be misaligned with that of their tutors (McConlogue 2015; Patchan et al. 2009; Rollinson 2005; Watson and Ishiyama 2012). One potential means of providing this support is ‘feedback on feedback’; that is, automated feedback on the quality of the feedback that has been provided (typically in written form). This feedback might, for example, simply analyse student comments for phrases and words related to feedback categories (Whitelock et al. 2010, 2012), or investigate the kinds of feedback that are in fact acted on, and relate these to textual features of the feedback (Nelson and Schunn 2009; Nguyen et al. 2017; Yadav and Gehringer 2016).
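
As a minimal, hypothetical illustration of the phrase-matching style of ‘feedback on feedback’ described above (deliberately simpler than the cited systems, with invented categories and cue phrases), a reviewer’s comment can be scanned for cues associated with common feedback categories, and the reviewer prompted about any that are missing:

```python
# Illustrative sketch: scan a peer reviewer's written comment for cue phrases
# tied to feedback categories, and flag categories that appear to be missing.
# Categories and cue phrases are invented, not those of the cited tools.
import re

CUE_PHRASES = {
    "praise":        [r"\bwell done\b", r"\bgood\b", r"\bclear(ly)?\b", r"\bstrong\b"],
    "suggestion":    [r"\byou could\b", r"\bconsider\b", r"\btry\b", r"\binstead\b"],
    "justification": [r"\bbecause\b", r"\bsince\b", r"\bso that\b"],
    "criteria":      [r"\bcriteri(a|on)\b", r"\brubric\b", r"\bargument\b"],
}

def categorise_comment(comment: str) -> dict:
    """Return which feedback categories appear in a reviewer's comment."""
    return {category: any(re.search(p, comment, re.IGNORECASE) for p in patterns)
            for category, patterns in CUE_PHRASES.items()}

comment = ("Good introduction, but consider restructuring the argument because "
           "the criteria ask for evidence from at least two sources.")
result = categorise_comment(comment)
missing = [c for c, present in result.items() if not present]
print(result)
print("Prompt the reviewer to add:", missing or "nothing - all categories present")
```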

Design 10.3: Calibration Tasks

  • Problem: Students should (1) critically apply the assessment criteria to artefacts of the form they will produce, (2) engage actively with exemplars, and (3) calibrate their judgement of their own and others’ work based on feedback (directly, and mediated via the diagnostic information provided to an instructor).

  • Task: Students are asked to assess exemplars, typically 3–5 artefacts of varying quality or on different topics. Students use the assessment criteria to assess these exemplars, providing a grade or point score, alongside written feedback. Once they have done this, students can be provided with feedback on the ‘accuracy’ of their assessments, as well as seeing both instructor and peer feedback provided to the same exemplars. This design complements Design 10.1, and can be used as a standalone task.

  • Tools/materials and participant structures: This task is typically conducted in individual then group structures, with students assessing individually (perhaps after a period of discussion of the criteria), and then discussing the collective feedback.

  • Iterations and Augmentation: Diagnostic feedback can be provided to students and instructors in a number of ways. First, student judgements provide insight into how aligned they (individually, and as a cohort) are with the instructor’s assessments of the same artefacts (a minimal sketch of such alignment reporting follows this design). Second, students can be provided with support in giving high quality feedback, through the use of natural language processing on their written comments. Such feedback has been shown to produce higher quality comments from students, which we would anticipate supports their learning about the criteria and thus develops their own evaluative judgement.
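
The alignment reporting mentioned in this design could be as simple as the following hypothetical sketch, which compares each student’s scores on shared exemplars with the instructor’s scores and reports individual and cohort-level deviation (the exemplar identifiers and scores are invented):

```python
# Illustrative sketch: diagnostic calibration feedback comparing student scores
# on shared exemplars against the instructor's scores.
instructor_scores = {"exemplar_A": 4, "exemplar_B": 7, "exemplar_C": 9}

student_scores = {
    "Ana": {"exemplar_A": 5, "exemplar_B": 7, "exemplar_C": 8},
    "Ben": {"exemplar_A": 8, "exemplar_B": 9, "exemplar_C": 9},
}

def alignment_report(student_scores, instructor_scores):
    """Mean signed and absolute deviation of each student from the instructor."""
    report = {}
    for student, scores in student_scores.items():
        diffs = [scores[e] - instructor_scores[e] for e in instructor_scores]
        report[student] = {
            "mean_bias": sum(diffs) / len(diffs),            # > 0 means over-marking
            "mean_abs_error": sum(abs(d) for d in diffs) / len(diffs),
        }
    return report

report = alignment_report(student_scores, instructor_scores)
for student, stats in report.items():
    print(student, stats)

# Cohort-level bias, which an instructor might use to decide whether the
# criteria need further discussion before the peer assessment proper.
cohort_bias = sum(s["mean_bias"] for s in report.values()) / len(report)
print("Cohort bias:", round(cohort_bias, 2))
```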

Finally, peer assessment may also be augmented by directing students’ attention towards specific features of the artefacts they are asked to assess. This differs from Design 10.3 in that Design 10.3 focuses on specific features of the feedback, while Design 10.4 also draws on specific features of the assessed artefact. As such, Design 10.4 provides a mechanism to focus feedback. In some regards this design is similar to a complementary design (not expanded here) in which students are scaffolded in completing their peer assessment through the use of rubrics or specific questions (see, for example, Gielen and De Wever 2015), and indeed, such structuring may also be useful.

Design 10.4: Focused Feedback

  • Problem: We want students to draw on specific features of the artefacts that they assess in providing feedback.

  • Task: In asking students to provide feedback, we can foreground particular features of the artefact to them, through specific instructions and structured peer-assessment forms.

  • Tools/materials and participant structures: As in Design 10.1, with the possible additions of structured peer-assessment forms and access to automated writing analytics tools.

  • Iterations and Augmentation: Peers engage with artefacts that have been analysed and annotated, summarized or otherwise visualized to (a) support their own judgement of the work, and (b) mediate the automated feedback through a process of peer-sensemaking. This design complements Design 10.1.

Learning analytics can augment this approach by foregrounding target features of the artefacts to the assessors, who can then choose to use this support in writing their feedback. That is, the peer feedback can be used to mediate the automated feedback, via the peer’s elaboration and focusing. For example, in the Academic Writing Analytics (AWA) project (see, for example, Knight et al. 2017), natural language processing is conducted on student writing to detect the presence of ‘rhetorical moves’ (rhetorical structures in the text that provide structural markers to the reader). In preliminary work, peers have been asked to provide feedback to each other on their work, making use of the AWA tool to foreground rhetorical structures in their comments (see, for example, Shibani 2017, 2018; Shibani et al. 2017).
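
To illustrate only the general idea (AWA itself relies on considerably more sophisticated natural language processing than this), a hypothetical cue-phrase sketch might flag sentences containing candidate rhetorical moves, so that a peer reviewer can see where to focus their comments:

```python
# Illustrative sketch (not the AWA implementation): flag sentences containing
# cue phrases that often signal rhetorical moves, as prompts for a peer reviewer.
import re

MOVE_CUES = {
    "contrast":   [r"\bhowever\b", r"\bin contrast\b", r"\bon the other hand\b"],
    "emphasis":   [r"\bimportantly\b", r"\bnotably\b", r"\bit is crucial\b"],
    "conclusion": [r"\bin conclusion\b", r"\btherefore\b", r"\boverall\b"],
}

def tag_rhetorical_moves(text: str):
    """Yield (sentence, [candidate moves]) for sentences matching any cue phrase."""
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        moves = [move for move, cues in MOVE_CUES.items()
                 if any(re.search(c, sentence, re.IGNORECASE) for c in cues)]
        if moves:
            yield sentence, moves

draft = ("Prior studies report mixed results. However, the evidence for this claim "
         "is weak. Importantly, the sample sizes were small. Therefore, further "
         "work is needed.")

for sentence, moves in tag_rhetorical_moves(draft):
    print(moves, "->", sentence)
```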

2 Conclusion

Learning analytics can be used to support assessment processes through the provision of feedback to both learners and educators regarding evidence of learning. This chapter has discussed different ways in which learning analytics can be integrated into practical pedagogic contexts, and has argued that one means through which learning analytics can achieve impact is by augmenting assessment design: impact is achieved by combining learning analytics with effective assessment through the kind of augmentation approach described here. I have argued that technology integration is a key consideration in taking this approach as a means to achieve impact with learning analytics. The flexibility of learning analytics in the context of designing assessments has been illustrated, at a basic level, through a set of peer assessment design patterns. In each case, a core assessment design is highlighted, and the potential to augment that design with learning analytics tools is demonstrated. By adopting such an approach, we can foster the testing and development of learning analytics with educators in the loop, making key decisions about how analytics relate to and impact their teaching contexts, while also supporting existing good practice in assessment design.