1 Introduction

New technologies present social and ethical challenges. Technology is not neutral; the interaction of people with technologies creates new ways of perceiving, thinking, doing and relating. Higher education assessment is no exception; old assumptions cannot be relied upon, and a range of hitherto unforeseen ethical and social dilemmas are emerging as universities slowly embrace technology-mediated assessment. This chapter explores the social impacts associated with the overlay of assessment and digital technologies and considers some of the ethical choices afforded to us as academics, assessors, researchers and learners. Ethical considerations permeate all higher education contexts; while they may not be anyone’s initial and immediate concern when thinking about assessment, they are important to ensure we are not harming our students, ourselves and our societies.

Society is being changed by digital technologies. We live in a digital world, where technology is often heralded as being innovative, exciting and future-oriented. At the same time, chatbots tweet offensive and malicious falsehoods (Neff and Nagy 2016) and Facebook data can be used without consent (Solon 2018). Reflecting this tension, discourses of technology in higher education polarise into doom and hype (Selwyn 2014). However, irrespective of pessimism or optimism, “digital education is undeniably a ‘big deal’” (Selwyn 2016, p. xviii). Moreover, not all technologies are necessarily educational; university students’ off-task use of technology may be ubiquitous (Aagaard 2015). In short, digital technologies are both impactful and omnipresent in higher education settings.

Assessment also has significant impacts upon students, albeit of a more traditional kind. Assessment’s importance is illustrated by the often repeated mantra “assessment drives learning”. However, this nostrum does not capture the social complexity underpinning assessment and the assessment practices of both educators and students. Assessment concerns standards, knowledge and certification. As such, it shapes university students in many different ways (Hanesworth et al. 2018) and, similarly, it also shapes academics (Raaper 2017). Assessment also ‘legitimises’ certain forms of knowledge (Shay 2008). At a simple level, assessment acts as a ‘gate’ – to pass an assessment may provide entry to future employment, status and other opportunities. In short: assessment is power. It legitimises knowledge and has the capacity to include, exclude and ‘discipline’ (to borrow Foucault’s term) students and academics alike (Raaper 2016).

The intersection of assessment (powerful) and technology (omnipresent) therefore deserves close scrutiny. Waelbers (2011, p. 138) suggests there is a need to take moral responsibility: to “take up the task of enquiring into the social impacts of technologies”. Others have taken up this challenge with respect to the ethical and moral use of technology in higher education (e.g., Selwyn 2014), with possibly the strongest focus in the area of data analytics, where Slade and Prinsloo (2013) argue that there is a moral imperative to act, as well as an obligation to avoid using student data (such as assessment data) without permission. There is also a substantial literature on critical digital literacies, which reveals significant and diverse thinking about the ethical, political and ideological nature of digital literacies across all levels of education (Pangrazio 2016; Potter and McDougall 2017). However, we seek to conceptualise issues particularly pertinent to the intersection of technology and university assessment, which we believe have not been discussed elsewhere.

We think this is particularly pressing due to the tremendous power exerted by assessment. At the same time, this is not an easy task because new technologies mean there are no precedents. Waelbers (2011, p. 67) notes: “falling back on established moral rules is difficult when developing new technologies or employing new technologies in new manners.” She exhorts us to use our “moral imaginations”. Therefore, we encourage readers, as they consult other chapters in this book, to be mindful of what else might occur, outside of the promising and positive results, and to think of alternative scenarios which might eventuate within different contexts. If we are not looking for the challenges, the barriers, the equivocal results and the potentially damaging outcomes, then these unanticipated and undesired consequences may become invisible.

The aim of this chapter is to explore some of the social impacts, moral responsibilities and ethical dilemmas presented by digitally mediated assessments. These affect students, educators, administrators, policy makers, software developers, researchers and the broader communities which the academies seek to enhance. Ethical frameworks in healthcare are concerned with notions of the capability of individuals to make decisions about themselves; seeking to provide benefit and avoiding harm; and the just distribution of benefits (Beauchamp 2007). We see these as having particular relevance to digitally mediated assessment in higher education around three topics: a) agency, b) diversity and c) labour. Embedded within these, we present three practical propositions to inform the ethical design and implementation of digitally mediated assessments. Our concerns and propositions are clearly not exhaustive; there are many others, and there are many deeper philosophical and moral readings of our current digital world (see for example Waelbers (2011) or Pangrazio (2016)). Our thoughts are strongly influenced by their framing within our own context (higher education in metropolitan Australia) and we suggest readers explore these social and ethical conundrums within their local contexts.

2 Agency: Managing the Challenge of the ‘Black Box’

In higher education, as in healthcare, the ability of individuals to make decisions about themselves is an important ethical principle. Recognising the interdependent nature of higher education, we believe this goal is better framed as agency. In general, agency is practised when individuals or their communities “exert influence, make choices and take stances in ways that affect their work and/or their … identities” (Eteläpelto et al. 2013, p. 62). We regard agency for students as a pedagogically and socially important aspiration. It is not controversial to suggest that students should have a degree of agency over their assessment work, in accordance with calls for students to be “responsible partners” in assessment (Boud and Associates 2010). This also aligns with Slade and Prinsloo’s (2013) discussion of ethical learning analytics. They contend that student data and the associated conditions of its use should always be made known to students, and that students should be given a degree of agency over how this data is generated, manipulated and used (Slade and Prinsloo 2013). Thus there is general agreement from both the assessment and technology literatures that student agency is both desirable and necessary.

Enacting these principles presents significant challenges. Technology-mediated assessments do not necessarily enable better practices. The affordances and constraints provided by particular platforms are frequently obscured. We suggest therefore that, with respect to technology-mediated assessment, agency is not just a matter for students but also for educators.

Educators can struggle to understand educational technologies. Sometimes educators are aware of an explicit ‘black box’, such as automated marking software (see Bearman and Luckin, Chap. 5 this volume). More frequently, the constraints of the system simply become part of the taken-for-granted nature of technology. For example, learning management systems and plagiarism software can implicitly constrain both educators and students through enabling certain (often simpler) types of assessment and making it more challenging to innovate or expand different kinds of assessment. Thus quizzes become even easier and groupwork becomes even harder to implement. In general, the constraints of technology may not be apparent to academics who have a complex and sometimes romantic view of technology in assessment (Bennett et al. 2017). If educators are struggling to acknowledge and understand the technologies or algorithms underpinning their assessments, how can they begin to explain them to their students? We contend that a necessary step towards agency is for both educators and students to sufficiently understand technology and assessment to make informed choices.

2.1 Proposition 1: Build Necessary Skills and Literacies Within Technology-Mediated Assessment

In order to make informed choices about technology-mediated assessment, educators and students must be able to ‘read’ both the technology and the assessment formats. There is an urgent need for such capabilities. As highly complex forms of technology such as automated marking become more prevalent, the opportunity for “black box” algorithms becomes greater, which increases the risk of unwitting inappropriate use, particularly as educators and students do not have any prior experience with these digital innovations. Our choices as educators and students are generally predicated on previous experiences of assessment. For example, an educator who has only ever set paper tests for undergraduate school-leavers may have a reduced set of design considerations when compared to a colleague who has spent time considering the assessment needs of a diverse cohort, with a blend of online and face-to-face students. Similarly, a student who does not understand what a wiki is or why it might be used for assessment may struggle to approximate the specific genre presented to them, and therefore resort to mechanistic approaches in order to maximise achievement. In this way, the data created by learning analytics or automated marking processes have the potential to be misunderstood and misapplied by all parties, as there are no prior contexts to help shape understanding.

Building skills and literacies is a means to promote greater agency for both educators and students. In this way, educators and students can both draw from a full repertoire of choices when doing assessment work. For the educator, it avoids the trap of developing assessments according to technological imperatives rather than pedagogical purposes. For the student, it provides increased ownership over the assessment process.

So what might these skills and literacies look like? At a simple level, ‘literacy’ can be considered the ability to ‘read’ and ‘write’. Elmborg (2012, p. 80) writes: “…To read something skilfully is to be literate. To be able to write something is to master the literacy.” We refer to this notion of literacy broadly as “skills”. At a more complex level, Freire and Macedo discuss how literacy is about reading “the world”; that being literate should permit “learners to recognize and understand their voices within a multitude of discourses in which they must deal” (Freire and Macedo 2005, p. 97). Literacies can be seen as a form of social change as much as skill development (Freire and Macedo 2005; Pangrazio 2016). This is a significant point, and we do not wish to promote a ‘transmission’ model of assessment or digital skills. Instead, we take the view that digital and assessment literacies are about understanding the plurality of meaning within assessment forms and digital technologies. Literacies, therefore, are necessarily about social, ethical and political concerns within a context of skill development.

Students and educators already come with varying degrees of digital and assessment skills, but it is important not to make assumptions. There is a view that digital skills are somehow different across generations. The myth of the “digital native” has been busted several times over (Bennett 2016; Margaryan et al. 2011; Selwyn 2009), and we should avoid misguided opinions of what “young people” like and are familiar with. However, what students and educators need to know about digitally mediated assessment may not be easily itemised. Joint orientation to, and discussion of, technology and its purposes and functions within assessments provides a useful way forward. Embedding skills and literacies development within our assessment pedagogies may produce even better traction.

We offer two contrasting examples to illustrate both the possibilities and the limitations. The first example – PeerWise – has been extensively scaled and serves to develop both digital and assessment skills. The second example – the Learning Analytics Report Card (LARC) – is a smaller-scale approach, but one which develops critical digital and assessment literacies.

The PeerWise software develops assessment skills through engaging students in the process of making and refining multiple choice questions (MCQs) (Denny et al. 2008). In this system, MCQs are used for learning purposes. All MCQs are student generated, and feedback can be provided to the question writers if there are errors within the question. When provided with such agency to contribute to assessment tools (albeit in a formative setting), students demonstrated the ability to detect errors and also agreed with faculty on the quality of questions (Denny et al. 2009). Subsequent work identified that students who participated more frequently in the process of creating MCQs also performed better on end-of-semester assessment (which did not contain MCQs) than those with low participation. This association was present for both high and low achieving students (Bates et al. 2012). The involvement of students in assessment creation is therefore likely to contribute to shared knowledge regarding assessment. The advantage of this approach is that it develops skills regarding MCQs as well as an understanding of how technology can mediate student contributions; the limitation is that it perpetuates a traditional form of assessment, which comes with many drawbacks.
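
To make this mechanism concrete, the sketch below illustrates the general pattern that PeerWise exemplifies: students author questions and peers flag errors back to the author. It is a minimal, hypothetical model; the class and field names are ours for illustration and do not reflect PeerWise’s actual implementation or data structures.

```python
# Hypothetical sketch of student-authored MCQs with peer error-flagging
# (our own illustration, not the PeerWise data model or API).
from dataclasses import dataclass, field

@dataclass
class PeerFeedback:
    reviewer_id: str
    comment: str
    flags_error: bool  # reviewers can point out errors in the question itself

@dataclass
class StudentMCQ:
    author_id: str
    stem: str
    options: list[str]
    answer_index: int
    feedback: list[PeerFeedback] = field(default_factory=list)

    def needs_revision(self) -> bool:
        # A question returns to its author when any reviewer flags an error.
        return any(fb.flags_error for fb in self.feedback)

# Usage: a student writes a question; a peer flags a problem; the author revises.
q = StudentMCQ(
    author_id="s101",
    stem="Which ethical principle concerns avoiding harm?",
    options=["Autonomy", "Non-maleficence", "Justice", "Beneficence"],
    answer_index=1,
)
q.feedback.append(PeerFeedback("s202", "Option D also fits; clarify the stem.", True))
print(q.needs_revision())  # True -> the question goes back for refinement
```

The point of the design is that assessment knowledge is built through the authoring-and-review loop itself, not through the final question bank.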

The LARC is an example of building digital literacies that may hold promise for assessment (Knox 2017). In short, 12 students were given access to limited learning analytics about their behaviours during the course (e.g., attendance, interactions, time on task) and the capacity to manipulate what was presented back to themselves, and at what time. The LARC offered regular reports, with language that deliberately lacked nuance and therefore had the air of being slightly provocative: e.g., “you have usually been extremely social during the course but in week 12 you interacted less with others.” The students were offered an opportunity to reflect on each ‘report’ and through this make personal meaning regarding the potential for this type of assessment. The LARC pilot aimed “to highlight the entangled and complicit conditions of data capture and surveillance, and to offer ways for students to both experience and reflect on forms of participation within the [learning analytics] process…” (Knox 2017, p. 50). The pilot data suggest that educators and students alike learnt through the LARC about the constraints and affordances of learning analytics and computer-generated feedback on assessment tasks. Indeed, the literacies are more than shared; they can be considered ‘co-constructed’ or jointly developed. However, the LARC was designed for those studying technology and society; this may limit its broader application.
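
By way of illustration, a LARC-style report might be generated along the following lines. This is a hypothetical sketch, not Knox’s actual implementation: deliberately blunt sentences are produced from simple weekly activity counts, and the student chooses which measures are reflected back to them.

```python
# Hypothetical LARC-style report generator: blunt, nuance-free sentences
# built from weekly activity counts, with the student selecting measures.
from statistics import mean

def blunt_sentence(measure: str, history: list[int], week: int) -> str:
    baseline = mean(history[:week - 1])   # average over all earlier weeks
    current = history[week - 1]
    trend = "more" if current > baseline else "less"
    return (f"You have usually recorded {baseline:.0f} {measure} events per week, "
            f"but in week {week} you were {trend} active ({current} events).")

def weekly_report(data: dict[str, list[int]], chosen: list[str], week: int) -> str:
    # The student controls which measures appear: a small grant of agency
    # over how their own data is reflected back to them.
    return "\n".join(blunt_sentence(m, data[m], week) for m in chosen)

activity = {
    "interaction": [14, 12, 15, 13, 11, 14, 13, 15, 12, 14, 13, 4],
    "attendance":  [1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1],
}
print(weekly_report(activity, chosen=["interaction"], week=12))
# -> "You have usually recorded 13 interaction events per week, but in
#     week 12 you were less active (4 events)."
```

Even this toy version makes the pedagogic device visible: the crudeness of the generated sentence is what invites students to question what the data can and cannot say about them.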

These two examples indicate how skills and literacies inform agency with respect to technology-mediated assessments. They enable informed choices, both in the short and longer term. We suggest that it is worth considering the following when researching or designing technology-enabled assessments. See Box 3.1.

Box 3.1: Practical Considerations Regarding Skills and Literacies That Promote Agency with Respect to Technology-Enabled Assessment

  • What do educators and students know about the constraints and opportunities of the technology – and the assessment design?

  • What do educators and students understand about the broader social constructions of technology-mediated assessments?

  • How can low skill levels be accounted for?

  • What opportunities do educators and students have to build their skills? How about their literacies?

  • What impact might technological and assessment skills and literacies have on how assessment is designed, implemented and undertaken in a digital world?

3 Diversity: The Challenge of Inclusive Technologically-Enhanced Assessments

The promise of technology is that it can ensure greater access to higher education (Devlin and McKay 2016), particularly for those who have been excluded on the basis of class, race, ethnicity, gender, disability, citizenship status or rurality. Technology-mediated changes such as online learning or text-to-voice readers may reverse some of this exclusion. However, there appears to be limited evidence gathered to date to support such claims. In other words, the availability of technology does not automatically enable diversity, although there certainly is the potential to reach greater audiences.

Many universities now have policies on accessible design for technologies. These are often based on the considerable literature exploring universal design for learning. Universal design offers both principles and processes for designing inclusive and fair assessments through thinking flexibly and providing alternatives to accommodate individual differences (Almond et al. 2010). This is important and laudable work, but it may not be enough; over a decade ago, Hanafin et al. (2007, p. 438) wrote in the disability literature: “[higher education] assessment has both intended and unintended consequences, and consequences are greater for some students than for others.”

Unintended consequences are difficult to track, especially in relation to assessment. Discrimination already occurs in assessment; for example, Esmail and Roberts’ (2013, n.p.) study of over 5000 candidates sitting a postgraduate medical examination indicated that “subjective bias due to racial discrimination …may be a cause of failure…” Such biases may be ‘invisible’ to educators and other students, and therefore there is particular danger when it comes to technology. Moreover, technology itself can be a source of discrimination: technology is not cheap, and while it is often assumed to be ubiquitous, this is only an assumption.

The power imbalance inherent in the system is unlikely to facilitate diverse students’ ability to appeal against, or even point out, these concerns by themselves. This power imbalance may be exacerbated by a rise in computational support to generate grades or provide feedback, as the underpinning algorithms tend to be derived from historical data. Though we consider the digital to be a facilitator of inclusivity, artificial intelligence tends to determine solutions according to previously submitted work. Thus it may continue to disadvantage diverse populations who do not fit previous patterns of achievement. How can we meaningfully ensure that inclusion is built into our technology-mediated assessments and that access to higher education is as broad as possible?
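
Before turning to our proposition, the mechanism at issue can be illustrated with a deliberately simplified sketch. This is not any particular vendor’s algorithm; it assumes only a scorer that grades new work by its similarity to previously graded submissions, and shows how such a scorer marks down work that does not resemble the historical record, regardless of its actual quality.

```python
# Illustrative (hypothetical) fragment: a similarity-based scorer trained on
# historical submissions systematically penalises unfamiliar kinds of work.
def similarity(a: set[str], b: set[str]) -> float:
    # Jaccard overlap between feature sets (e.g., vocabulary or genre markers).
    return len(a & b) / len(a | b)

def historical_score(features: set[str], past: list[tuple[set[str], float]]) -> float:
    # The grade is taken from the most similar past submission, scaled by that
    # similarity: atypical but excellent work attracts a low score simply
    # because nothing like it has been graded before.
    sim, grade = max((similarity(features, f), g) for f, g in past)
    return sim * grade

past_submissions = [
    ({"essay", "thesis", "references", "formal"}, 0.85),
    ({"essay", "thesis", "formal"}, 0.70),
]
conventional = {"essay", "thesis", "references", "formal"}
unconventional = {"narrative", "first-person", "community", "oral"}
print(historical_score(conventional, past_submissions))    # 0.85: matches history
print(historical_score(unconventional, past_submissions))  # 0.0: no precedent
```

Real automated markers are far more sophisticated, but the underlying dependency on historical patterns, and hence the risk to students whose work departs from them, is the same.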

3.1 Proposition 2: Community Engagement Helps Identify ‘Invisible’ Barriers

Inclusivity benefits all (Hanafin et al. 2007). However, educators may be unaware of the potentially deleterious impact of our assessments, due to the necessary constraints of individual experiences as well as institutional systems. This means ethical frameworks such as the Open University’s call for data modelling and analysis to be “sound and free from bias” (Prinsloo and Slade 2017) have some limitations. In plain terms, we do not know what our own (inevitable) biases are. It is therefore important to work with all potential stakeholders to consider potential barriers to participation which may be the result of digitally mediated assessments. The engagement of diverse lay people in the design and implementation of technology-mediated assessment – most likely at a course or departmental level – may help take account of those ‘invisible’ cultural and contextual factors. There is a long tradition of lay involvement in health care (Entwistle et al. 1998), and there may be lessons for higher education. Lay people may be better able to illuminate and question practices which are taken for granted within the academy, but less so in general society. Of course, consultation should also include students, across the demographics of the enrolled cohort, and faculty, such as tutors/markers, academic developers, technology support staff, lecturers, administrators and so on.

In addition to lay perspectives, the academy may also usefully seek information from industries outside education to inform better technology-mediated assessment practices. Health, military and other high-stakes fields have ongoing assessment requirements (Bennett Jr. et al. 2002; Castanelli et al. 2016), and so may have innovated different forms of assessment which better deal with particular access issues. In some areas, they may gain from insights into university processes, and so such consultation may provide bidirectional benefits. However, the ‘industry of assessment’ – the private enterprises who seek to make a profit out of assessment – may also present barriers to community engagement. Not all assessment is controlled by educators or even universities. For example, outsourcing assessment or grading to private enterprise is increasingly common. We suggest that the university sector must consider how to engage with these providers.

In summary, educators and institutions must examine themselves, in order to identify crucial dilemmas and potential pitfalls. This is not a fixed state as social norms shift and change; we may find old barriers fall and new ones emerge. Under these circumstances, the most ethical approach may be through repeatedly and genuinely interrogating technology mediated assessment practices.

This proposition leads to four questions for all those involved in digital assessment, as outlined in Box 3.2.

Box 3.2: Practical Considerations Regarding Identification of ‘Invisible’ Barriers to Diversity in Technology-Mediated Assessments

  • What does enabling access really mean for technology-mediated assessment?

  • How do we prevent ourselves from promulgating entrenched discriminations in the process of developing or implementing technology-mediated assessment?

  • What opportunities are there for diverse stakeholders, including those outside of the academy, to have involvement in our assessment processes?

  • How do we support our increasingly diverse cohorts through technology-mediated assessment?

4 Labour: The Challenge of Assessment Products in a Digital Landscape

Assessment has traditionally involved students engaging in academic labour, the fruits of which were seen by educators and then kept by the student or discarded. Technology has changed this by making it easier to store and duplicate the work students produce. This has created opportunities for software vendors to use student work to enhance the assessment process. However, sometimes these players have additional motives. For example, academic integrity software vendors provide text-matching services which have greatly reduced the incidence of copy-paste plagiarism; however, to do so they have kept copies of student work. Students have taken legal action against this practice, unsuccessfully claiming that it infringed upon their intellectual property (Bennett 2009). Business models are also being built around students sharing their assignments online with their peers; concerns have been raised about the legality of these approaches and the boundary between the student’s property and the university’s property (Rogerson and Basanta 2015). Both of these cases raise the question of who should be allowed to profit from students’ assessment labours.

In a digital world, students’ assessment labours can have effects beyond the boundary of the academy, and even beyond the domain of education and technology. While it is heartening to see students’ work have an impact for the community, this impact is not always positive. For example, Wikipedia has seen many student contributions, and there is a small body of literature investigating the effects of contributing to Wikipedia on students (Di Lauro and Johinke 2017). Despite the noble intent of those setting Wikipedia assignments, students often do more harm than good to Wikipedia, creating enormous work for long-term Wikipedians (Wikipedia 2018).

Assessment is intended to evidence what students are capable of; however, students may need to be aware that in a digital world it may also form a permanent record of their mistakes. There may be secondary uses of this evidence against students. This has been recently illustrated by the case of Dr. Bawa-Garba, a junior doctor whose reflections, collected as part of an educational e-portfolio, may have influenced a court case regarding her fitness to be a medical practitioner (Dyer and Cohen 2018). This case illuminates the potential consequences of asking students to post on Twitter, create websites, or upload videos to YouTube for assessment purposes. Future employers of those students, and indeed society, may look favourably or unfavourably upon these artefacts, depending on their content and tone. How then can we manage the social and commercial impacts of assessment labour in a way that minimises their harms and maximises their benefits to students and society?

4.1 Proposition 3: Technology Mediated Assessment Should Inform, and Be Informed by, Institutional Policies

Unlike the other challenges discussed thus far, questions of the ownership of, access to, and impacts of assessment products are generally answered through institutional policies. These are essentially legal questions and concern intellectual property (IP) arrangements. With respect to IP, the World Intellectual Property Organization (http://www.wipo.int/portal/en/) offers comprehensive access to IP policies across countries that describe how institutions manage IP arising from assessments. Relatedly, these institutional policies generally describe strict privacy protocols as well.

However, there are still pitfalls. The use of technology for assessment will continue to develop and change well into the future, and some types of use (and their potential benefits and harms) will not be covered by current policies. Innovative educators who work with technologies outside their institutionally provided tools can find themselves in grey areas as policy catches up (Bennett et al. 2017). There are also points where university policy is relatively powerless. For example, a range of software, including plagiarism detection software and learning management systems, has end-user license agreements that require the “donation” of student data. In these situations, students do not have full control of their intellectual property. Moreover, legal frameworks are not the same as ethical frameworks. Recent events such as the Facebook and Cambridge Analytica data breach (Solon 2018) are clear examples of the evolution of what is considered appropriate to share, and of the realisation that data are not as private or safe as they were thought to be. Constant vigilance may be required to stay abreast of new developments and modifications to license agreements.

Through considering the questions in Box 3.3, we urge educators and designers of digitally mediated assessments to think about how the digital products of assessment labour are being used.

Box 3.3: Practical Considerations Regarding Assessment Labour, Institutional Policies and Technology-Mediated Assessment

  • What interactions do students have with the broader community as part of this technology-mediated assessment?

  • What digital work do students produce as a consequence of this assessment and who else might see this?

  • What are the institutional policies, ethical or legal frameworks that may influence how this technology-mediated assessment is implemented and how assessment products are stored?

5 Conclusions

Technology will continue to mediate and influence the format and processes of assessment into the future. One of the greatest challenges facing those who work with technology is how to take account of an ever-innovating landscape. Technological determinism might suggest that all human actors are shaped by technologies (Oliver 2011) and that the digital controls us, rather than the other way around. We reject this view. Our belief is that individuals have many opportunities to exercise agency, and therefore should seek to be moral, and to ‘do good’ through their practices.

In this chapter we have presented three concerns facing us as academics, assessors and learners. There are many others. Readers may wish to invoke their “moral imaginations” to apply these to the other chapters within this book, or other work about assessment and technology. As you do so, we suggest you ask yourself three questions: Who controls? Who accesses? Who benefits?