Introduction

There is much hype around the potential for technology to enhance assessment, including how it can enable the scaling up of Assessment for Learning (AfL). At the time of writing, technologies that enable new possibilities for assessment are prominent in two key hype barometers: the ‘Hype Cycle for Education’ (University of Minnesota, 2016) and the New Media Consortium’s Horizon Report for Higher Education (Johnson et al., 2016). Technology can support the scaling up of educational practice in a range of ways. It can allow us to do ‘more with less’, such as providing video feedback to students who traditionally received written comments, without requiring more staff time. Technology can enable us to scale up our thinking from single units or modules towards approaches like portfolios and curriculum maps that require thinking at the course or programme level. Technology can also enable near-infinite scaling up of AfL through approaches like Massive Open Online Courses (MOOCs).

Given all this potential, it is understandable that educational technology can evoke excitement, enthusiasm and even evangelism. ‘Technological determinism’ is the notion that technology in and of itself can change education (Oliver, 2011). It puts technology in the driver’s seat with pedagogy its passenger. This is part of the ‘positive project’ of educational technology: an underlying belief that technology is a good thing for education (Selwyn, 2011). Phrases like ‘technology-enhanced assessment for learning’ carry with them the suggestion that technology can, will or does enhance education.

But technology does not always live up to the hype. In their extensive review of educational technology research, Tamim, Bernard, Borokhovski, Abrami and Schmid (2011) begin by retelling Thomas Edison’s 1913 claims that the motion picture would very soon make books obsolete. This has been a recurring theme in education, in which emerging technologies are hailed with great fanfare but have at best only a modest impact on educational practice, supported by largely ungeneralizable research. An example is that of virtual worlds such as Second Life, which has been claimed to support experiential and situated learning including continuous cycles of feedback (e.g. see Dawley & Dede, 2014). However, while these studies argue that virtual worlds can improve feedback, they also confirm Warburton’s (2009) observation that technology-enhanced AfL is unlikely to be a ‘quick win’ but rather a result of considerable risk mitigation and pedagogical strategic planning. The focus on the potentiality or ‘state-of-the-art’ uses of digital technologies in education has largely obscured the compromised and constrained ‘realities’ (Selwyn, 2010). Indeed, Laurillard (2008) has wryly observed, ‘education is on the brink of being transformed through learning technologies; however, it has been on that brink for some decades now’ (p. 1). This arguably blinkered perspective, focusing on the ‘potential’ for technology to ‘enhance’ and ‘provide opportunities’, can also be found throughout the research literature relating to technology-enhanced AfL (e.g. Gikandi, Morrow & Davis, 2011).

This chapter brings a critical perspective to the question of how technology may support the scaling up of Assessment for Learning in higher education. We make particular reference to the core AfL strategies from Carless, Chap. 1, this volume: productive assessment task design, effective feedback processes, developing student understanding of the nature of quality and students practising making judgments. Through synthesis of the literature on three sites of technology-supported AfL – feedback, programme-level portfolios and MOOCs – we explore issues of scale, context and unintended consequences. Although technology can allow us to do more AfL with less, across more curricula, and for more students, we have reason to be cautious and sceptical.

Technology-Enabled Assessment for Learning

The broad field of educational technology has been critiqued as being so obsessed with the ‘state of the art’ that it misses out on what actually happens in students’ and teachers’ lives, that is, the ‘state of the actual’ (Selwyn, 2010). What is the ‘state of the actual’ of technology in AfL? Despite the bright potential of assessment technologies, their wide-scale adoption has been much slower than technology advocates expected (Warburton, 2009). Even the research literature, which is dominated by intervention studies conducted by researchers (Stödberg, 2012), demonstrates slow progress. Stödberg’s (2012) structured review of the literature indicates that the typical technology-supported assessment study is small scale, short term and focused on a multiple-choice intervention. When compared with ‘state-of-the-art’ approaches, like high-fidelity simulation or intelligent tutoring systems, it could be easy to get disheartened by the slow progress towards technology enablement of assessment for learning at scale.

However, assessment is notorious for being resistant to change. Assessment at universities is a complex system with many actors and a slew of policy and bureaucracy (Macdonald & Joughin, 2009). Within this context, educators and students are engaged in a range of (dis)trusting relationships (Carless, 2009), with risks and anxieties (Deeley & Bovill, 2016). Resistance to assessment change is powerful (Deneen & Boud, 2013), as is resistance to educational technology change (Blin & Munro, 2008). Technology-enabled AfL sits at the intersection of these two forms of resistance, so it is perhaps unsurprising that progress has been somewhat slow.

One area where large gains have been made is the adoption of online submission and return of assignments. In the Australian context, paper submission of assignments is becoming uncommon. Although mundane, the online submission and return of assignments enables a range of assessment practices in a digital context. However, in and of itself, online submission does not enhance assessment or achieve AfL. As an example of the difference, online submission and return of assignments enables the use of feedback comment banks, ranging from low-tech copy-pasting from a list to high-tech dedicated tools. This enables teachers to provide more information to students, but the degree to which that information enhances student learning remains reliant on the feedback technique of the marker, student attitudes towards feedback, student feedback literacy and the overarching feedback design of the assessment sequence. Further complications of online submission include double-handling of work (e.g. through printing, writing comments and then typing those comments later), slower on-screen marking, and staff technology skill levels and resistance (Tomas, Borg, & McNeil, 2015). Technology enables approaches, but logistics and staff experience can prevent enhancement. Technology enablement that is too difficult or unpleasant for staff faces great impediments in improving assessment (Bennett, Dawson, Bearman, Molloy & Boud, 2016).
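
As a purely illustrative sketch of the low-tech end of that spectrum (the criteria, comments and function names below are our own invention, not features of any particular marking tool), a comment bank is essentially a set of reusable comments keyed to common issues, which a marker copies and then personalises for each student:

```python
# A hypothetical, minimal feedback comment bank: reusable comments keyed to
# common issues, intended as a starting draft for the marker to personalise.
COMMENT_BANK = {
    "argument": "Your central argument is clear, but consider signposting it earlier in the introduction.",
    "evidence": "Several claims would be strengthened by citing recent, peer-reviewed sources.",
    "structure": "Some paragraphs cover more than one idea; aim for one controlling idea per paragraph.",
}

def draft_feedback(issues_flagged, student_name):
    """Assemble a draft set of comments for the marker to edit, not to send verbatim."""
    lines = [f"Dear {student_name},"]
    lines += [COMMENT_BANK[issue] for issue in issues_flagged if issue in COMMENT_BANK]
    lines.append("Please get in touch if any of these comments are unclear.")
    return "\n".join(lines)

print(draft_feedback(["argument", "structure"], "Sam"))
```

Even in this toy form, the bank only supplies information; whether that information becomes feedback still depends on the marker’s technique, the students’ engagement and the overarching feedback design described above.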

Rather than technology-enhanced AfL, in this chapter we adopt the relatively neutral term ‘technology-enabled Assessment for Learning’, treating technology not as the enhancer or improver but as a tool that provides affordances which may enable assessment approaches. In the next three sections, we explore how technology can enable approaches that scale up AfL in three different ways.

Scaling Up Through Feedback Efficiency: Digital Modalities in Feedback Cycles

Feedback can powerfully influence student learning, and effective feedback processes are a core strategy of AfL. Large-scale meta-analysis of existing research in school education concludes that feedback has a substantial effect on learning in that sector (Hattie & Timperley, 2007). Compelling arguments have been made that these findings are transferrable to higher education and that they are consistent with the higher education-specific feedback literature (Hattie, 2009). Feedback even underpins the causal mechanisms of most of the top 10–20 factors that enhance student achievement (Hattie, 2009). Feedback is therefore a critical site for scaling up AfL. This section explores recent work on using technology to do more feedback with less time and resources.

Technology-enabled feedback approaches are discussed in depth elsewhere in this volume (Moscrop and Beaumont). In brief, the AfL literature has in recent years shown increasing interest in providing feedback through different media. These approaches typically replace written comments on student work with audio, video or screencast information (e.g. Henderson & Phillips, 2015). There are also bodies of research around efficient use of text-based feedback, such as writing effective feedback for online multiple-choice exams (Lefevre & Cox, 2016) or using feedback comment banks (Debuse & Lawley, 2016).

A key issue in any discussion of scaling up is that of sustainable workload. After an initial time investment in learning to provide feedback in this mode, the use of audio, video and screencast feedback has been reported to take less time while producing a greater volume of feedback information (Henderson & Phillips, 2015; Lunt & Curran, 2010), although some proportion of the extra feedback volume as measured by words may be due to the differences between spoken and written text (Laughton, 2013). Henderson and Phillips (2015) claim that the media affordance of communication efficiency (a greater volume of words coupled with the richness of media such as gesture and intonation) increased the number of issues that could be discussed, as well as their clarity (as reported by students), particularly in relation to complex issues such as drawing connections between current performance and what needs to be worked on in the future. However, they also point out that in their designs they spent more time on relational and contextual issues, including recognizing and valuing the student’s performance in the context of personal circumstances. They claim that this both draws on and reinforces the pedagogical relationship between teacher and student, thus facilitating student engagement with the substantive feedback comments.

These approaches scale up AfL by ‘doing more with less’, an approach to scaling AfL that we define as making improvements for an existing cohort of students while simultaneously reducing teacher time commitment or resourcing. As educators have limited time to implement improvements to assessment, this is a particularly appealing form of scaling up AfL. Doing more with less is not new, and although technology is the enabler here, pedagogy can play the same role: for example, Boud (1995) included a ‘more with less’ argument when justifying self-assessment.

However, with respect to technology-enabled feedback, what is ‘more’? Measuring in terms of volume of comments betrays a ‘telling’ conception of feedback, one that focuses on information transmission from teacher to student. This is a popular conception of feedback, and it is embodied in national surveys of students (Carroll, 2014; Higher Education Funding Council for England, 2014), which ask whether ‘The staff put a lot of time into commenting on my work’ (CEQ) or ‘I have received detailed comments on my work’ (NSS). However, over the past few decades, thinking about feedback has moved beyond a focus on information transmission and inputs towards a focus on change (Ramaprasad, 1983; Sadler, 1989). Comments on student work are only hopefully helpful information; they only enable feedback when they lead to change in learners. When this information does not lead to change, it is merely ‘dangling data’ (Sadler, 1989, p. 121). With that conception of feedback in mind, what might ‘doing more with less’ look like? Technology may allow us to provide more information, but this in and of itself does not generate more feedback.

For change to occur in learners in response to feedback information, the students first need to access the information. Against this hurdle, technology-enabled feedback appears to do more with less. For example, when compared with written paper-based feedback, students were ten times as likely to access audio feedback in one small-scale study by Lunt and Curran (2010). However, as noted by Fawcett and Oldfield (2016), the literature used to support conclusions around increased student access makes the outdated comparison of high-tech versus no-tech and often has research design problems. It is therefore unclear if these approaches result in increased access rates (the ‘doing more’ part) or just reduce teacher workload (the ‘with less’ part).

Student preferences and experiences of new feedback media may also suggest that these approaches enable ‘doing more with less’, in that they provide students with more of what they want. Existing research mostly supports the notion that students prefer these new approaches. This research uses a range of approaches, with emphasis on qualitative and nonexperimental quantitative designs. The one study employing an experimental design, which randomly assigned students to receive feedback information through different media, found no significant difference in the student experience between audio and written comments (Fawcett & Oldfield, 2016). Although they do not provide convincing detail or comparison of the structure or content designs of the text or audio interventions, their finding suggests that novelty effects or unfair comparisons may be partially at play when conclusions are drawn about student preference. Regardless, the suggestion that student preferences are necessarily a sign of improved learning is dubious; for an accessible dissection of the ‘myth’ that ‘the more they like it, the more they learn’, see Clark (2010). While we value the student experience, on their own, student preferences are not enough to say that new media enables doing more feedback with less.

It is also possible that a media switch may lead to improvements in the quality of feedback information. These improvements may be verbal or nonverbal: audio feedback can convey prosody and intonation, and video feedback can capture facial expressions and gesture, which may carry informational value beyond the words themselves. Qualitative research suggests students can experience video feedback as more real, honest and authentic (Henderson & Phillips, 2015), lending support to this argument. However, there is also the possibility that this more embodied mode may lead to tentativeness, leniency or being less critical (King, McGugan & Bunyan, 2008). Although we would never advocate being nasty, neither would we advocate being confusingly nice; approaches like the ‘feedback sandwich’ can hurt rather than help learning. Students come to expect the predictable ‘mandated linguistic ritual’ of the sandwich, with flattery largely ignored by learners seeking meaningful criticism in the middle of the sandwich (Molloy & Boud, 2014). For further critique of the sandwich, see also the chapter by Ajjawi and colleagues, this volume.

Evidence suggests that a media switch, on its own, is unlikely to improve feedback. There is a strong tradition of ‘media comparison studies’ in education, which compare learning outcomes for students across two or more different media conditions with the same instructional design (Russell, 2013). Although individual studies sometimes show better outcomes for one media condition, when the results of hundreds of these studies are pooled together, the overall result is one of no significant difference (Russell, 2013). This tells us that switching from lectures to videos, textbooks to audio books or face-to-face groups to online groups, but not changing our pedagogy, will most likely not lead to better learning (Russell, 2013). This of course assumes the technology functions well and everybody involved knows how to use it.

In educational technology research more broadly, there is compelling evidence that instructional designs, learning outcomes and assessments need to be tailored to suit the new media for educational technology to improve learning (Means, Toyama, Murphy, Bakia & Jones, 2010). The same may be true for feedback designs. Fortunately, the literature has much to say about how to improve feedback designs and proposes several useful models that lend themselves to technology enablement. For example, Boud and Molloy’s Feedback Mark 2 model (Boud & Molloy, 2012), which involves an iterative dialogue between students, peers and teachers, could benefit from the logistical support that online peer review tools provide for peer feedback.

So, do technology-enabled feedback media switches let teachers ‘do more with less’? The evidence is stronger for the ‘less’ than for the ‘more’, depending on which ‘more’ you mean. If you want more ‘feedback’ (i.e. more change and improvement), then you should change the feedback design, not just the media.

Scaling Up AfL Thinking from Units/Modules to Programmes/Courses: Technology-Enabled Portfolios and Curriculum Mapping Tools

Assessment thinking has moved from a focus on the immediate needs of a single task or module-level learning outcome towards a parallel focus on programme-level outcomes. For students and teachers, this can be a complicated task that is cognitively taxing. Where a student’s assignment may have been a stand-alone artefact in the past, constructed for the immediate needs of the task and then discarded, it is now likely to be collected into a portfolio tool. Where the intended outcomes of a task were once stand-alone, they now form part of a mapped-out curriculum that can be viewed at a macro- or micro-level. The AfL strategy of productive assessment task design (Carless, this volume) therefore now operates beyond the immediate task at hand and forms part of an integrated and coherent suite.

Portfolios and curriculum mapping are obvious candidates for technology enablement, even though they do not require any computerization at all. Both approaches require arduous administration: matching tasks to unit outcomes, mapping generic graduate learning outcomes to specific degree-level outcomes, and storing and reporting on large datasets. In a resource-strapped modern university, technology may make portfolios and curriculum mapping cognitively and logistically feasible, enabling the scaling up of AfL thinking from individual tasks to degree programmes.
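
As a purely hypothetical illustration of why software helps here (the outcome codes, units and function below are invented rather than drawn from any actual mapping tool), a curriculum map is essentially a many-to-many relation between assessment tasks and outcomes, from which coverage gaps can be reported automatically:

```python
# Hypothetical curriculum map: the programme-level outcomes each unit's
# assessment tasks claim to address, plus a simple coverage-gap report.
PROGRAMME_OUTCOMES = {"PLO1", "PLO2", "PLO3", "PLO4"}

CURRICULUM_MAP = {
    ("Unit 101", "Essay"): {"PLO1", "PLO2"},
    ("Unit 102", "Group report"): {"PLO2"},
    ("Unit 203", "Portfolio"): {"PLO1", "PLO3"},
}

def coverage_gaps(programme_outcomes, curriculum_map):
    """Return programme outcomes not addressed by any mapped assessment task."""
    covered = set().union(*curriculum_map.values())
    return programme_outcomes - covered

print(coverage_gaps(PROGRAMME_OUTCOMES, CURRICULUM_MAP))  # prints {'PLO4'}
```

Automating reports like this reduces the administrative burden, but, as we argue below, it does nothing by itself to change how staff think about or design the mapped assessments.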

The implementations of these tools are large scale too. Portfolios are increasingly marketed and implemented as faculty-wide or institution-wide interventions with long-term agendas (Posey et al., 2015). It would be reasonable, then, to expect substantial evidence that portfolios enable scaling up of AfL thinking at a mass scale. However, research investigations into eportfolios tend to focus on small-scale, short-term, self-report data on student preferences that are not particularly useful beyond an immediate context (Rhodes, Chen, Watson & Garrison, 2014). This represents a curious flipping of Selwyn’s ‘state of the art versus state of the actual’: while the rest of the educational technology literature is obsessed with the bleeding edge, the research literature is currently trailing behind the state of practice in eportfolios. Although portfolios may enable this level of thinking, there is little evidence that they commonly achieve it at scale.

Similarly, curriculum-mapping approaches are increasingly being undertaken through software as part of large-scale Assurance of Learning programmes. Like portfolios, these are large-scale endeavours and very common in some parts of higher education (Lawson et al., 2015). Assurance of Learning aims to ensure that on successful completion of a programme of study, all students have met the stated outcomes.

Lawson et al. (2015) report on interviews with leaders from 25 Australian business schools that conduct Assurance of Learning, finding that assessment, workload and time burden were the most common challenges. We know much about the potential of eportfolios and curriculum mapping tools to improve education – but little about the challenges of large-scale implementations, apart from war stories revealed in confidential interviews (e.g. Lawson et al., 2015). The drivers of adoption of these tools may also not be as bright as we would hope; behind the agentic and authentic rhetoric, there may be accreditation and accountability agendas. For example, Assurance of Learning in Australian business schools is driven by external accreditation, government audit and professional body requirements. Although these programmes may have ostensibly had formative goals, ‘the actual practice was mostly use of summative assessment’ (Lawson et al., 2015, p. 589).

Although tools like curriculum mapping and eportfolios may enable the scaling up of AfL thinking, they do not make it happen automatically. Improving assessment towards AfL is challenging, and it is quite likely that large-scale portfolio and curriculum mapping implementations face similar challenges to other assessment interventions, e.g. resistance, complexity and assessment literacy (Deneen & Boud, 2013; Macdonald & Joughin, 2009; Price, Rust, O’Donovan, & Handley, 2012). Providing the technology tools and bureaucracy of scaled-up assessment thinking will not on its own scale up AfL; the cynicism and ‘box ticking’ that undermine AfL must also be overcome (Lawson et al., 2015). Even if educator thinking about assessment is changed, changes to actual assessment practice are not guaranteed (Offerdahl & Tomanek, 2011).

As with educational technology in general, the social complexities surrounding eportfolios and curriculum mapping tools seem to be as influential as their technological affordances. Technology can reduce the paperwork burden, but addressing pedagogical and curricular challenges is more difficult. Technology can enable scaled-up AfL thinking, but it does not on its own scale up changes to AfL practice to achieve productive assessment task designs.

Scaling Up AfL to Serve an Infinite Student Body: MOOCs as AfL

Massive Open Online Courses (MOOCs) are a relatively new form of online course that provides free access to education for students around the world. MOOCs operate at a massive scale, with tens of thousands of students being enrolled in some courses (Hollands & Tirthali, 2014). In a typical MOOC, content is delivered through video lectures and readings, and students work through a variety of computer-based learning activities.

It is possible to characterize MOOCs as an AfL enterprise in that they are largely structured around meaningful tasks that provide (usually automated) feedback on performance and progress. We recognize that doing so is somewhat contentious, so in the following section, we systematically compare MOOCs against AfL as defined in this volume and then ask what the MOOC experience can tell us about scaling up AfL in general. In addition to transmission teaching moments (video lectures, readings, etc.), MOOCs often include activities that in regular higher education would fall under the AfL banner, for example, formative quizzes, computer-based assignments that provide rapid feedback and online peer facilitated discussion.

MOOCs are largely built around tasks we class as assessment. Taking Joughin’s definition, we view assessment as ‘[making] judgments about students’ work, inferring from this what they have the capacity to do in the assessed domain, and thus what they know, value, or are capable of doing’ (2009, p. 16). The automated delivery environments employed by MOOCs are constantly making judgments about student work, ranging from unsophisticated evaluations of the correctness of their responses to multiple choice questions, to more complex analytics of student online activities aided by artificial intelligence. These are used to make inferences about student capability, leading to the award of certificates on successful completion of the course. MOOCs thus clearly involve assessment, even if a human assessor never sees the student’s work.

MOOCs may involve assessment but do they meet this book’s definition of AfL and employ AfL’s key strategies? In the opening chapter, AfL is defined as follows:

Assessment for learning is any assessment for which the first priority in its design and practice is to serve the purpose of promoting students’ learning (Black, Harrison, Lee, Marshall, & Wiliam, 2004, p. 10)

In mentioning the issue of priorities, Black et al. (2004) make a nod to the fact that assessment has a variety of purposes to serve; a single act of assessment may serve many purposes (Boud, 2000). Since MOOCs generate certificates of completion, it is possible to conceive of MOOC assessment’s primary purpose as certification. These certificates do, however, generally hold low status. In response, many MOOC providers offer certificates that can be used for credit in higher education institutions, but these are usually offered only on completion of an additional assessment conducted under more stringent conditions. That additional assessment has the explicit purpose of credentialing; it is clearly an Assessment of Learning event. The remainder of the assessment within a MOOC serves as preparation and guidance towards such an event.

Comparing MOOC assessment against the synthesis of main AfL strategies and processes at the commencement of this book shows the potential for strong alignment. AfL employs productive assessment task design, which is underscored conceptually by constructive alignment (Biggs, 1999). MOOCs can be highly modularized, with clear instructional goals for each section and sequences of learning activities intermingled with low-stakes assessment events that correspond to those goals. Students may be allowed to retake these low-stakes assessments until they are happy with their level of performance.

Assessment for Learning is also underpinned by effective feedback processes. As a resource-constrained teaching mode – resourcing is usually not proportional to the number of enrolled students – little of the feedback in MOOCs comes from human teachers. There is a rich body of research on computer-supplied feedback, and it supports the effectiveness of high-quality feedback on multiple-choice questions (Lefevre & Cox, 2016); when multiple-choice questions are used without any feedback at all, they can lead to learning untruths (Marsh, Roediger, Bjork & Bjork, 2007). In addition to often providing immediate feedback on multiple-choice questions, MOOCs also involve rich tasks with intrinsic feedback. For example, one author of this chapter is currently studying the ‘R’ statistical programming language in a MOOC, which features regular small assignments on which he receives detailed feedback information every few minutes. Peer feedback is also a common feature of MOOCs, which provides a clear tick in the AfL column. However, the use of peer feedback is typically driven by resourcing (e.g. Piech et al., 2013), and peer feedback often forms part of a summative peer assessment process; negative responses by students or educators to such an approach are unsurprising (Liu & Carless, 2006).
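
As a purely illustrative sketch of such intrinsic, automated feedback (the task, checks and messages below are our own invention, not the design of any actual MOOC platform), an autograder can report not only whether a small programming submission is correct but also what to work on next:

```python
# Hypothetical autograder sketch: runs simple checks on a student's function
# and returns improvement-oriented comments rather than a bare correct/incorrect mark.
def student_mean(values):
    # Example student submission: forgets to handle the empty list.
    return sum(values) / len(values)

def give_feedback(submission):
    """Check a submitted mean() function and return comments aimed at improvement."""
    comments = []
    if submission([2, 4, 6]) == 4:
        comments.append("Correct on a typical list - well done.")
    else:
        comments.append("Check how you combine the values; try working through [2, 4, 6] by hand.")
    try:
        submission([])
        comments.append("Handles an empty list without crashing.")
    except ZeroDivisionError:
        comments.append("What should happen for an empty list? Consider returning None or raising a clear error.")
    return comments

for comment in give_feedback(student_mean):
    print(comment)
```

Whether such comments become feedback in the AfL sense still depends on students acting on them, which no autograder can guarantee.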

Students in many MOOCs have opportunities for practising making judgments, another key strategy for AfL. MOOC research has focused intensely on a variety of judgments: self- and peer-assessment, self-determination, self-regulation and self-direction (Gasevic, Kovanovic, Joksimovic & Siemens, 2014). For better or worse, the free, open and impersonal nature of MOOCs may require a degree of self-regulation not common to traditional face-to-face or online courses.

MOOCs that involve peer feedback and peer assessment require students to make quality judgments. However, the degree to which MOOC students are capable of this, or supported to develop their skills with quality appraisal, has not been very well explored. Exemplars may be provided; however, there is no evidence that sophisticated pedagogies that utilize this as an opportunity to develop evaluative judgment are common. MOOCs do not systematically use the AfL strategy of developing student understanding of the nature of quality.

MOOCs thus are capable of conducting AfL and employing several key AfL strategies – and doing so in a way that scales without additional resourcing. This presents the obvious question: what can the AfL movement learn about scaling from the MOOC experience?

If we take scalability as a function of the resources required to serve a particular student body, we see that different educational approaches scale differently. The audio/video feedback approaches described earlier scale almost linearly, which is to say that roughly twice the resources are required to provide feedback to twice the student body. MOOCs, by contrast, must scale nonlinearly: a MOOC with 10,000 students does not receive ten times the resourcing it would if it were taught to 1000 students. MOOCs inarguably scale in a nonlinear fashion; however, the degree to which they are AfL depends on the relative priority given to assessment’s learning purposes and the strategies used to support student learning.
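
To make the contrast concrete (using our own illustrative notation rather than figures drawn from the MOOC literature), the cost of serving n students under the two approaches can be sketched as:

```latex
% Illustrative cost functions only, not empirical estimates
\[
  C_{\text{linear}}(n) \approx c\,n
  \qquad \text{versus} \qquad
  C_{\text{MOOC}}(n) \approx C_{\text{design}} + \varepsilon\,n,
  \qquad \varepsilon \ll c .
\]
```

Per-student cost in the first case stays roughly constant at c, while in the second it falls towards ε as n grows; this is why, as discussed below, heavy up-front investment in design is what makes nonlinear scaling possible.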

MOOCs that achieve AfL that scales nonlinearly do so through frontloading AfL resources into educational design, rather than marking or feedback time. AfL scholars already urge us to rethink our resource allocation towards where it best supports learning (Boud, 1995). The MOOC experiment suggests that when student numbers are huge and resources are not, we should invest heavily into design. MOOCs cost tens or hundreds of thousands of dollars to offer, the majority of which is invested in design and development (Hollands & Tirthali, 2014).

Where feedback scalability improved the educational experience for existing students, MOOC scalability relies on the assumption that ‘something is better than nothing’. MOOCs provide educational access to students who previously lacked it. However, the low completion rates of MOOCs (ranging from 3% to 15% in Hollands & Tirthali, 2014) tell us that in scaling up access to AfL, the resource-constrained approaches taken may be simultaneously scaling down success. To borrow the mantra of the higher education student retention community, ‘access without support is not opportunity’ (Engstrom & Tinto, 2008). Access to AfL is not the same as a supported AfL opportunity. Although the MOOC experience has demonstrated that AfL can scale infinitely, it may be doing so in a way that runs counter to the aims of the AfL community.

Conclusion

In this chapter, we have showcased three approaches to using technology to scale AfL. By switching the media used to deliver feedback, technology enables educators to improve the feedback experience for students without spending more time on marking. By supporting us to think bigger, technology can enable programme-level thinking about assessment with portfolios and curriculum mapping. By changing the relationship between student numbers and resourcing requirements, MOOCs allow AfL to be offered to tens of thousands of students at a time.

A key message in this chapter is that technology does not provide a simple ‘out-of-the-box’ solution to scaling up AfL. Black and Wiliam’s (1998) landmark review study established the importance of context in AfL, and it appears that contextual influences are enduring. The layering of technology in any educational context inherently changes the practices involved, including the production, consumption and interaction of both educators and students. Moreover, the sheer complexity of education, not least the diversity of teachers and students, teaching and learning and policy and institutional culture, means that a successful design for technology enablement of AfL in one context is unlikely to automatically succeed in the same way in another context. As a consequence, it is imperative that AfL designers remain productively wary of technology innovations and the promises of potentiality that surround them. This critical perspective allows us to acknowledge the potential for technology enablement but affirms the need to critically redesign such approaches according to specific contexts and goals.

Indeed, a critical perspective on the three approaches presented in this chapter for using technology to scale up AfL reveals three key issues that technology-enabled AfL designers should consider.

First, technology-enabled AfL interventions need to be guided by clear goals. Across the three examples explored in this chapter, the intended outcomes were not entirely clear. Were feedback media switches meant to improve learning or just the student experience? Are portfolios and curriculum mapping tools for improving learning or for compliance? And are MOOCs meant to improve opportunities for AfL or just access? Any technology-enabled AfL intervention in research or practice needs clear goals and a strategy to evaluate whether these goals were met. In the absence of clear goals and evaluation plans, an outsider could reasonably suspect technological determinism: that this ‘AfL’ intervention is being driven by technology rather than pedagogy.

Secondly, technology-enabled AfL interventions should pay attention to relational and contextual matters. Assessment change is hard enough; when the additional conceptual shift towards AfL is added, it becomes more challenging, and when technology is introduced, the problem becomes more challenging still. Even high-quality portfolio or curriculum mapping tools can be defeated by ‘box ticking’ approaches by staff and students if they lack the time or support to fully engage. Technological affordances are just possibilities; changing technology without addressing underlying organizational matters is likely a doomed approach.

Thirdly, educators and institutions need to invest in improvements to assessment design. Key meta-analyses of educational technology (Means et al., 2010; Russell, 2013) concur that adding technology to an existing design and expecting improvements is a flawed approach. The feedback and MOOC examples show that investing in improved assessment designs (including feedback designs) is necessary to leverage the gains from technology.

Although we have taken a critical stance through this chapter, our conclusion is fairly positive and aligned with the principles set forth in the opening chapter of this book. Technology may support AfL in its quest for scalability; however, as with AfL in general, productive assessment task design and a concern for the people involved are crucial.