Introduction

The process of learning is just as important as the measured outcomes of learning. This view is widely endorsed by the ‘assessment for learning’ movement that promotes formative assessment (Black and Wiliam 2009), and has been the cornerstone of educational development philosophy for many decades. Chapter 2 explored this interest in a learning journey or ‘distance travelled’ by a student. In higher education in the UK, there is growing interest in the idea of ‘learning gain’ as giving much more information about both learner progress and the quality of teaching than single grades and marks alone (Higher Education Funding Council for England 2015).

Chapter 2 also explored different ways of judging learning gain. Measures of learning gain using examinations and other quantitative assessments are fraught with difficulty, and much effort goes into convincing stakeholders of the reliability of marks and grades (Hughes 2014). Feedback on assessments can also provide rich information on individual learning gain, instead of or in addition to quantitative measures. However, a modular course design tends to discourage feedback that looks to the future (Hughes et al. 2015) and might make it difficult to track learning gain.

One potential solution to capturing the process as well as the outcomes of learning is the use of digital technology. Recording marks in digital format is now commonplace, but this chapter focuses on recording feedback. The chapter begins with a discussion of how technology might support an ipsative approach to formative assessment by making feedback more accessible to both assessors and students.

The chapter will then explore a case study of the use of digital technology to review student progress in a taught postgraduate research programme, using a tool to generate a feedback history record for each student. Taught postgraduate students at the UCL Institute of Education, University College London, submit their coursework online to a virtual learning environment (VLE). The current system enables students and staff to view the marks and feedback for each module of a programme, but they do not get a sense of the complete learning journey of the student. The VLE was modified so that an assessment report could be generated which made a student’s marks and feedback for all assignments on all modules easy to view in one place. A feedback response form was also provided to help students reflect on what they learnt from feedback. Thus, the programme leader, the teaching team and supervisors would have an overview of the progress of each student during the taught phase of the programme. The intentions were that progress, or lack of progress, would be more visible, so that students could make use of past feedback and demonstrate progress in the next assignment, and that borderline pass/fail cases could be reviewed in detail by a programme leader.

Introducing a new technology may produce a combination of intended and unintended, positive and negative, effects that require local monitoring. The feedback recording processes were piloted with a group of staff and students who were interviewed about using the feedback history report and the feedback response process. While some users of the system – including students – could see the potential for feedback and performance monitoring, others raised further technical and pedagogic questions.

Although the issues raised are specific to this case study, some more general points emerge. The chapter will conclude that simply making a new technology available does not cause change: use of technology is a social activity, influenced by values, custom and practice, and beliefs about learning (Oliver 2013). A key lesson learnt is that it is easy to underestimate the complexity of responses from different stakeholders when introducing a simple but radical idea.

Use of Digital Technology to Enhance Ipsative Formative Assessment

Ipsative Formative Assessment

It is widely recognised that, at all levels, learning depends on appropriate formative assessment activity. This means that learners have opportunities to engage with feedback on their work, whether from a teacher, from peers or from self-evaluation (Black and Wiliam 2009; Molloy and Boud 2013). Feedback may be written or verbal, or experienced directly from the learning environment (Laurillard 2012), as when a child touches something hot and quickly withdraws. Irrespective of its source, feedback can take many forms: it can be corrective, critiquing, praising, interrogating or developmental (Orsmond and Merry 2011; Hughes et al. 2015).

We saw in Chap. 2 that there is wide agreement that feedback should have immediate application, and as a consequence feedback is often produced with a short-term developmental, and often corrective, aim in mind. But such a quick fix may not give learners a sense of a learning journey over time. Hughes (2014) has proposed that feedback can give a learner an overview not only of where they are now, but also of how far they have travelled and what the appropriate next steps and goals are. Focusing on progress rather than outcomes can be motivational for all learners, especially weaker learners, and even those who achieve high marks can be encouraged to raise their game.

It could be argued that grades or marks provide learners and teachers with a measure of learning gain over time and a visible means of marking progress. But to measure learning only as an increase in grades or marks is a broad-brush approach that omits detail of the obstacles that learners have overcome and the areas in which their skills and knowledge have blossomed. Furthermore, an increase in formally recorded grades or marks may be an unachievable goal for many learners; standards and expectations may rise at the same rate as the learner becomes more proficient in the discipline. Thus, in outcomes-driven assessment any learning gain may be obscured. In schools, learning gain may sometimes be made explicit through identified levels of literacy or mathematics, but in other forms of education a student might continue for long periods with a succession of demoralising or mediocre marks that give little indication of the development and learning that is taking place.

One way to give learners and assessors a much richer picture of the progress being made is to ensure that feedback includes explicit references to progress – or, if necessary, lack of progress. This is ipsative feedback – feedback that refers to the learner’s previous work or learning gain. Ipsative feedback might, for example, inform a learner about their responses and actions in relation to previous feedback. But for feedback to be ipsative in this way it is not sufficient to consider a piece of work in isolation: an ipsative approach to formative assessment requires consideration of progress over time and over several iterations of learning a particular skill or disciplinary requirement (Hughes 2014). A short-term improvement may be no more than a temporary ‘blip’ in progress, so information on progress needs to be gathered repeatedly.

There is a big problem here. While grades and marks are recorded formally and can be made visible to students and assessors, feedback is usually hidden and, if it is recorded at all, this is done locally. Students may keep records of past feedback but, unless they are encouraged to assemble a picture of their development over time in, for example, a portfolio or log book, past feedback is easily ‘lost’; even if it is looked at once, it may never be referred to again. Teachers may keep personal records of feedback that they have provided for individual learners, but again such records are easily displaced, not easily accessed or not stored in one place. Teachers and students alike may rely on memory for verbal feedback, with all the associated difficulties of accurate recall. Teachers also rarely have access to feedback from other sources, such as peers or other colleagues, so synthesising evidence of a learner’s progress over time would require effort beyond what is normally expected.

The obvious solution to the invisibility and inaccessibility of feedback in this longer term view of learning is to capture and store feedback in a centralised place. The question then becomes one of which technologies might enable a feedback ‘history’ of each learner to be recorded and accessed.

Use of Digital Technology to Capture Feedback and Make It Accessible

In an age of digital learning, it is not surprising that feedback is becoming digitised. In higher education feedback no longer consists of a few scribbled and often illegible comments on a piece of writing or examination script: it is electronically produced using word-processing software and feedback pro-formas. Peer feedback and self-evaluation can occur in online discussion fora and feedback dialogue may occur in wikis, comments on blog postings, digital portfolios and other media (Rennie and Morrison 2013). Even verbal feedback can be audio-recorded and presented digitally.

But the digitisation of feedback does not necessarily mean that digital feedback is accessible and easily trackable over time so as to enable ipsative feedback. There remains the problem that feedback may reside in different locations, which may or may not be accessible to assessors and learners. Ipsative feedback may still be a challenge in the digital world. What is needed is a technology that pulls together feedback for individual learners from many sources and presents it in an easy-to-access format. The obvious candidate is the widely used virtual learning environment, but VLEs are not usually set up to capture feedback over time and, where they are, we know little about how feedback histories are being used, if at all. There are many questions a researcher might ask, including:

  1. What technologies can support the capture of feedback from different sources over time?

  2. Does the capture of feedback from possibly different sources over time provide a useful ‘feedback history’ of a learner? If so, useful for whom?

  3. Does the accessibility of ‘feedback histories’ of individual learners facilitate an ipsative learning and assessment process?

It is worth noting here that capturing and storing feedback does not tell us anything about the quality of feedback and its relevance to overarching learning outcomes and the development of learner attributes. However, we could argue that making feedback by others – peers, teachers and self – more visible for comparative purposes is a vital step in improving feedback practice (Boud and Molloy 2013) and in developing learner understanding of the expectations of assessment (Nicol 2010).

A Feedback History Tool – How It Works and Its Potential Use

Our feedback history tool was developed rapidly as a proof of concept, making minimal changes to our VLE, which is based on the Moodle platform, and taking advantage of existing functionality as much as possible. Moodle is one of the most widely used VLEs in higher education in the UK (Walker et al. 2013) and the world (Dahlstrom et al. 2014), and new third-party functionality can be added by developing a plugin: additional code that communicates with the VLE through standardised mechanisms. This way, the code can be transferred easily to other Moodle installations elsewhere, even if they are configured differently.

The feedback history tool was developed in collaboration with the University of London Computing Centre (ULCC), who at the time of the pilot were hosting and maintaining our VLE. ULCC had already developed a flexible reporting plugin, to which we added our feedback history report, enabling a rapid release of the report to nominated pilot-phase users, who were participating module tutors, programme leaders and thesis supervisors. As a result, our report only works with the ULCC reporting plugin, but the report code was later used in the development of the standalone student-facing MyFeedback Moodle plugin (Gramp and Neumann 2015).
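To illustrate the general pattern only – the actual tool is a PHP report added to the ULCC reporting plugin, and none of the Moodle or ULCC APIs are reproduced here – the following Python sketch shows the idea of a report component that a host reporting framework can discover and run through a small, standardised interface. All names (ReportPlugin, register_report, feedback_history) are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical stand-in for the VLE's store of assessment data:
# student id -> list of submission/grade/feedback records.
VleStore = Dict[str, List[dict]]

@dataclass
class ReportPlugin:
    """A report that the host reporting framework can discover and run.

    The framework only needs the plugin's name and a callable that builds it,
    which is the essence of the 'standardised mechanism' described above.
    """
    name: str
    build: Callable[[VleStore, str], List[dict]]

def feedback_history(store: VleStore, student_id: str) -> List[dict]:
    """Collect every stored record for one student, ordered by due date."""
    return sorted(store.get(student_id, []), key=lambda record: record["due_date"])

# The host framework keeps a registry of available reports; ours is one entry.
REPORT_REGISTRY: Dict[str, ReportPlugin] = {}

def register_report(plugin: ReportPlugin) -> None:
    REPORT_REGISTRY[plugin.name] = plugin

register_report(ReportPlugin(name="feedback_history", build=feedback_history))
```

Because the host framework knows only the registry interface, the same report code can, in principle, be moved to another installation without modification, which is the portability benefit described above.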

Feedback History Report Components

The feedback history report lists the complete submission, grade and feedback history for a single student, as long as the details are stored in the VLE. Figure 6.1 represents a typical feedback history report, showing multiple submission items per course, the grade awarded on each assessment item’s grading scale, the assessment type, and the submission and due dates. Submission dates in italics would normally appear in red to indicate a missed due date. Underlined text in the figure represents hyperlinks to the containing course, to the assessment item, to the student’s original submission file and to the teacher’s feedback for the relevant assessment item. Depending on the type of assessment – that is, how submissions were technically uploaded – the behaviour of the submission and feedback links might differ.

The report can be ordered by any of the columns by clicking on the column title. It therefore provides an immediate overview of the student’s overall assessment performance, and teachers can quickly select the relevant items to review colleagues’ feedback on previous work, thus enabling them to detect trajectories or to comment on learning gain across submissions.

Fig. 6.1 Feedback history tool
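To make the shape of the report concrete, here is a minimal Python sketch of what one row of such a report might hold and how click-to-sort behaviour can be modelled as sorting rows by a chosen column. The field names are assumptions for illustration and are not the tool’s actual code.

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class FeedbackHistoryRow:
    """One line of a feedback history report (field names are assumed)."""
    course: str                  # hyperlinked to the containing course
    assessment_item: str         # hyperlinked to the assessment item
    assessment_type: str         # e.g. draft or final submission
    grade: Optional[str]         # on the item's own grading scale
    submitted: Optional[date]    # None if nothing has been submitted
    due: date
    submission_url: str          # link to the student's original submission
    feedback_url: str            # link to the teacher's feedback

    @property
    def late(self) -> bool:
        """A missed due date, shown in red/italics in the report."""
        return self.submitted is None or self.submitted > self.due

def order_report(rows: List[FeedbackHistoryRow], column: str,
                 reverse: bool = False) -> List[FeedbackHistoryRow]:
    """Re-order the report by any column, as clicking a column title would."""
    return sorted(rows,
                  key=lambda row: (getattr(row, column) is None, getattr(row, column)),
                  reverse=reverse)
```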

User Groups

The projected user groups for the report were:

  • personal tutors,

  • thesis and dissertation supervisors,

  • module teachers,

  • programme leaders,

  • academic administrators,

  • external examiners,

  • students.

In our pilot, we did not work with personal tutors or students, because the pilot courses did not have personal tutor arrangements and the tool could not be used by students for technical reasons.

The case for personal tutors and thesis/dissertation supervisors was clear: independent access to a review of overall performance, or of specific items, can save time in preparing for student meetings and enable these user groups to pick up on issues that the student might not report or identify, which in turn can improve the quality of tutoring and supervision feedback.

Module teachers are a group that needs due consideration. Access to the full assessment and feedback history would enable module teachers to implement a proper ipsative assessment strategy, as the tool would allow them to look back on, refer to and integrate previous feedback given to a student. This, however, assumes a non-anonymous marking policy, because the report would make it easy for module teachers to identify students, which might be problematic in some cases.

Academic administrators highlighted the usefulness of the compiled assessment overview as a comparator to check that data is consistent between the main registry database and the VLE. In the absence of an automatic synchronisation mechanism between these databases, which is still lacking in many UK universities, the report simplifies manual checks.

Proposed Uses of the Tool

The tool enabled programme leaders to keep an overview of the status of submissions across multiple modules, and of the overall performance across their programme, even when a student’s module choice included elective modules from other programmes. Without the tool, a programme leader would need to ask relevant programme or module leaders in person for data, and might then receive paper-based information. The feedback history report streamlined this process and thus saved time.

Student use of the feedback history tool was repeatedly requested by module teachers; however, our pilot design was incompatible with this use. In a survey, students indicated that access to the report might be useful from a pragmatic perspective, although they seemed comfortable with the existing way of accessing their results and feedback. Staff, however, hoped that students would access feedback more, or more often, if it was compiled on a single feedback history page. Module teachers in particular highlighted that the report would complement a new feedback response form nicely as an additional pedagogical tool, allowing students to go back easily to previous feedback in order to respond more effectively to the feedback reflection form questions. Personal tutors and supervisors might prefer students to use the report in preparation for a tutorial meeting, as students would know better which items were relevant and which were not – thus taking work away from staff. Detail about these potential affordances of the feedback history tool and the related feedback response form is explored in the case study below.

The Case Study: Applying a Feedback History Tool and a Feedback Response Form to a Professional Doctorate Programme

The Programme

The Doctor in Education programme (EdD) is a professional doctorate which has been running at the UCL Institute of Education since 1996. The programme can be separated into two stages: a taught stage, which consists of three courses, each with an assignment to complete, that build into a portfolio of practitioner research; and a research phase, which consists of two pieces of independent research – the shorter Institution Focused Study (IFS) and then the Thesis. This pilot study was applied only to the taught phase.

The EdD programme recruits around 35 students a year, each of whom has a Master’s degree and at least four years’ professional experience in education. Many of those entering the programme each year are very experienced professionals; most have been out of formal education for some time. The taught phase of the programme is designed to help these experienced professionals to develop an academic research proposal for the research phase which will address a problem of practice they have identified in their workplace. All EdD students are part-time and their research is usually embedded within the workplace.

The taught phase of the EdD consists of three courses. The first, Foundations of Professionalism (FoP), is designed to introduce the students to doctoral-level study and provide insights into professionalism and associated theories in education. In the assignment, the students are asked to reflect on professionalism within their own area of education in light of the theories discussed. Feedback on this assignment is provided at two points: the initial draft submission gives the students the opportunity to receive formative feedback in relation to the grade criteria, and this is followed by feedback on the final piece of work. The information in this feedback that is transferable to future courses is largely about academic styles of writing and how to construct an academic argument.

The second course is Methods of Enquiry One (MoE1). In this course, the students are supported in developing their research focus and research questions. They are introduced to research design and strategy while thinking through the possible ethical implications of their proposed plan. In the assignment the students are asked to develop a proposal for their first piece of research, the IFS. Once again they receive formative feedback on their draft and summative feedback on their final piece of work.

The final course is Methods of Enquiry Two (MoE2). In this course, the students are introduced to a wide range of research methods in education and social science. In the assignment the students are asked to undertake a pilot study for their IFS, building on the proposal developed for the MoE1 assignment. The students again receive formative and summative feedback. The result of this design for the taught phase is that each assignment should ideally build on the previous one to support the students in the design of their IFS. The formative and summative feedback can be ipsative within each course, and the feedback history tool should make it easier for the programme team to make feedback ipsative across the taught phase. A student feedback response form also aimed to encourage ipsative self-assessment.

Once the taught courses are completed, the students finish the taught phase of the programme by pulling together what they have learnt from the taught courses into a portfolio of practitioner research. This process involves looking back at all the formative and summative feedback received, as well as the content of the courses and assignments, to consider what they have learnt and how this will feed into their work in the research phase.

Historically, building the EdD portfolio was a paper-based activity, with the student printing off copies of their three assignments and all feedback and adding these to a 2000-word statement. They would then give this to the supervisor to read. The supervisor would complete the supervisor sign-off form, which would be added to the other papers to be bound and submitted. The portfolio was then approved by the programme leader, and passing it meant the student was ready for the research phase.

Introducing the Feedback History Tool

The feedback history tool was first introduced to the EdD programme team at a team meeting of seven academic staff, including the programme leader, as a potentially useful way of viewing student assessment data in the VLE. After a short demonstration they were asked how they might find the tool useful. The discussion was recorded with permission. Although the core team members are also supervisors, a further five supervisors who were not at the meeting were also introduced to the feedback history report and invited to comment on how they might use it. After being instructed by the programme leader on how to use the feedback history report, supervisors were also invited to further interviews, but their responses indicated that they had not used the tool, so these interviews did not go ahead. Two student focus groups, totalling 18 students, were also shown the tool and asked to comment on its value for them and on any concerns about their assessors and supervisors using the tool. Students were also asked to comment on the feedback response form.

Programme Leader Experience

The EdD programme leader is tasked with signing off all the portfolios of practitioner research. In order to progress to the research stage the student needs to pass the portfolio with at least three C grades, although those obtaining CCC or BCC profiles on the taught courses (two students in this case study) are interviewed by the programme leader to see if they are suitable for the research phase. The EdD has an exit award (PG Diploma in Practitioner Research) for those who leave at the end of the taught phase, and students with lower grade profiles are encouraged to consider this option.

There were two benefits of the feedback history tool for the programme leader. Firstly, signing off all the portfolios online made the process easier. The task could be completed anywhere with internet access, as the physical portfolios did not need to be carried, and the tool provided a quick way to view the original feedback on the assignments.

The second benefit related to the identification of struggling students in preparation for the interviews. Two students were identified as BCC passes; there were no CCC passes in this cohort. For the first case, using the feedback history tool it was possible to see that the student had applied the formative feedback received to turn very weak drafts into passes. In the interview the programme leader was able to ask the student about their experiences in the taught phase, and the student said they had learnt a lot from the feedback, having struggled largely with writing rather than with a lack of understanding of the area. Given that the student was able to react to feedback and was allocated a supervisor able to support their writing, the student was allowed to continue.

The second student clearly showed progression over the courses. Their first assignment just scraped a pass (C), but the second course was a good C and the third just scraped into the B band. It was clear from the grade profile and the feedback given that the student was improving; the student confirmed in the interview that their confidence had grown over the year as they worked out what doctoral-level study required. Given this clear improving trajectory, the student was also allowed to continue.

Overall, the experience of using the feedback history tool as a programme leader was positive: it enabled the programme to make a big step towards being paperless and made tracking student progress through the feedback significantly easier, which provided additional evidence alongside the student’s own view of their progress over the taught phase.

Supervisor and Teaching Team Experience of the Feedback History Tool

The feedback history tool looked to be a useful device for EdD supervisors. All EdD students have a supervisor appointed at the start of their studies, whom they are encouraged to meet termly to discuss their evolving research ideas in light of the taught courses. Although the EdD supervisor would have been at the recruitment interview, most have very little contact with their EdD students beyond the termly meetings until the second year, when they sign off the portfolio of practice. At this point the supervisor is required to read the 2000-word reflective statement and look through the existing assignments with feedback in order to write their own short statement reviewing their student’s progress.

The supervisors who were shown the feedback history report agreed that it would be useful to have all the feedback in one place. One supervisor explained that it would help to get an overview of a recently transferred student:

Well for example I have recently taken on an EdD student, I don’t know if this would work.…I have taken her on quite late in the day and she is doing her thesis now but in the case that that feedback was available to me that would be inordinately helpful for me because I am new to her and her work and her style of writing.

The feedback history tool provided the team with the opportunity to remove paper from the process entirely. Although there are many e-portfolio programmes which can be used, the feedback history tool provided supervisors with a single place to access all of the submitted work and feedback, making it unnecessary for the student to print it. This was especially helpful for the third of our cohort based outside the UK. The portfolio could then be completed by the student submitting a 2000-word reflective statement online, together with the supervisor’s statement. This substantially reduced the administration costs and time associated with the portfolio and, of course, reduced costs for students, who no longer needed to print and post the portfolio.

Supervisors were contacted to explain the new process, where the tool could be found and how it would be used to replace the paper-based portfolio. However, interviews with supervisors after the portfolios were submitted suggested that many had not noticed the difference and had found other ways, including e-mail, to obtain the portfolio materials they needed to assess.

In the team meeting where the feedback history report was presented, a senior staff member suggested that it would be useful for enabling staff to review each other’s feedback, but the emphasis here was on consistency rather than on building on the feedback of others:

It would be very useful for making our feedback more uniform…the amount you write and the degree of detail because there is always the problem that some students say look I’ve only got half a side and he’s got two and a half. Consistency is better for students.

Such a statement hints at a management use for the tool in monitoring the quality and quantity of staff feedback – a point which re-emerges in the next section on student views.

Student Views of the Feedback History Tool

Students were generally in favour of the feedback history report as it might encourage them to look at past feedback:

Good idea to revisit old work so you don’t make the same mistakes and build on positive feedback.

At doctoral level, students unsurprisingly suggested that they already review past feedback and that the tool might only make this easier rather than prompting new behaviour.

There was some concern about the feedback of all students being visible to all members of the teaching team:

Staff may be anxious about putting feedback in writing if [it] is more public.

This again demonstrates awareness of the potential use of feedback histories for quality monitoring of feedback.

Another student was concerned that a marker who could see past grades might be influenced, but did not say whether or not this might also apply to past feedback:

As with juror’s not knowing a defendant’s past, markers seeing previous grades may be influenced.

While any parallels between trial by jury and educational assessment could be extensively debated, removing the marker’s access to past grades might easily dissipate this concern. For feedback histories, the benefits of building on past feedback will need to be weighed against the possible influence of past feedback on grading impartiality.

Feedback Response Forms: Encouraging Cumulative Learning from Feedback for a Portfolio Assessment

Although the students did not have access to the feedback history report during the pilot, the EdD programme team changed their assignment submission forms in time for it. The purpose of the changes was to introduce reflection on the feedback process and support the students in building this into their end-of-first-year portfolio.

At the draft/initial submission stage the following was added to the assignment submission forms (Fig. 6.2).

Fig. 6.2 Feedback response form for draft submission

At final submission stage the following addition was made (Fig. 6.3).

Fig. 6.3 Feedback response form for final submission

The aim of the changes was to engage students in a feedback journey: to shift a view of feedback as given to justify a grade towards a view of feedback as an interactive process in which they had a stake. The final assignment for the first-year students was a portfolio of practice which asked them to draw on their learning across the programme and provide a 2000-word reflective statement. The feedback response forms were worded to acknowledge this portfolio, prompting engagement with feedback and its subsequent use in the reflective statement of the portfolio.

Table 6.1 shows the number of times students referred to their feedback in the 2013 portfolios (paper-based, before the revised assignment form was used) and in the 2014 portfolios (submitted electronically, with the feedback response form in use).

Table 6.1 Frequency of feedback discussed in the EdD Portfolios

On average, students in 2014 were more than twice as likely to mention their feedback as those in 2013 (2.06 mentions in 2013 compared with 4.80 in 2014). This increase in the average rate of discussion of feedback is statistically significant at the 1 % level (t = 4.0053, p = 0.0001). Much of this difference is driven by a large reduction in the proportion of students not mentioning their feedback at all (from 38.9 % in 2013 to 10 % in 2014). This fall in the proportion of students not discussing their feedback is also significant at the 1 % level (z = 2.6732, p = 0.0038). The introduction of the feedback response process therefore seems to have significantly reduced the number of students ignoring their feedback in the portfolio and significantly increased the amount of consideration students gave to it. These results suggest that the feedback response form made a difference to the amount of space given in the portfolio to the student’s reflections on their feedback; however, it is possible that increased staff and student discussion about feedback, due to the piloting of this form and the feedback history tool, also had an effect.
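As an illustration of how a difference in proportions of this kind is typically tested – not a reproduction of the authors’ analysis, since the raw counts and cohort sizes are not given here – the sketch below runs a two-proportion z-test with hypothetical counts chosen only to be broadly consistent with the reported percentages. The t-test on mention rates would additionally require the per-student mention counts.

```python
# Hypothetical counts for illustration only: the chapter reports that 38.9 %
# of students in 2013 and 10 % in 2014 did not mention their feedback at all.
from statsmodels.stats.proportion import proportions_ztest

students_not_mentioning = [14, 3]   # assumed counts of zero-mention portfolios
cohort_sizes = [36, 30]             # assumed 2013 and 2014 cohort sizes

z_stat, p_value = proportions_ztest(count=students_not_mentioning, nobs=cohort_sizes)
print(f"z = {z_stat:.4f}, p = {p_value:.4f} (two-sided)")
```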

In addition to possibly increasing the frequency of the mention of feedback in the portfolios, the redesign of the feedback forms helped the students to engage with the feedback rather than report it as a reason for a grade. In 2013, most who mentioned their feedback did so to justify lower than expected or desired grades. By 2014, the discussion of feedback was much more related to what they had learned from it. This shift also changed how students felt about their feedback, with the 2013 cohort often reporting feedback that had aggrieved them while the 2014 cohort reported the feedback that impacted most on their learning. We might also suggest that student access to a feedback history report might further improve the student engagement with feedback by making the process of responding to past feedback easier.

Benefits and Challenges of Making Progress Visible through Feedback

Supporting and Enhancing Learning

Capturing feedback and making it easier to access has some potential learning benefits that have been evidenced in this case study. Three enhancements are digital efficiency savings, learning overviews and stimulating feedback dialogue.

First, moving from paper-based to digital processes provided efficiencies for staff. However, the degree to which technology can make existing processes more efficient will depend on the social context: some users might resist a new technology because they perceive it to be difficult to use or unintuitive, while others may see savings of effort. The application of technology is not just about the affordances of the technology – that is, what the technology enables users to do, in this case access past feedback – it is also about the social context of both the design and the application (Oliver 2013; Wajcman 2015). In the case study the programme leader was enthusiastic about the feedback history because paperless assessment is faster to administer and efficient administration is part of the role, but the supervisors did not use the new technology and continued with previous practices, perhaps because they did not perceive any immediate time saving and were possibly deterred by the investment of time needed to find out how to access the system.

Second, an overview of student progress can be valuable for making decisions about the progression of struggling students or for seeing the assessment history of a newly acquired supervisee. In the case study there was evidence that a teacher can use past learning to help a student make decisions about future learning. This constitutes an ipsative view of feedback and learning in which students are helped to build on past mistakes and limitations and to develop in steps appropriate for that learner (Hughes 2014). So a decision on whether or not an apparently weak student should continue on the programme depends not on performance alone, but on the progress they have been making towards expected goals.

Effective feedback helps the student see where they are now and where to go next (Hattie and Timperley 2007). Enabling students to be self-regulating – that is, managing their own learning trajectory by responding to feedback – is potentially more powerful than teachers doing all the work (Nicol and Macfarlane-Dick 2006). Prompting students to reflect on feedback can also make a real difference to their engagement, and this is why it is widely recommended that feedback should take the form of a dialogue (Nicol 2010; Orsmond and Merry 2011). The increased references to feedback in the portfolios of these students after the new feedback response form had been introduced do suggest that a systematic process set up to encourage reflection on feedback is useful and provides at the very least an internal or self-dialogue about feedback, and possibly further dialogue with tutors and peers. It might also be that simply drawing more attention to feedback, through the teaching team explaining the new processes to the students, had a positive effect on how seriously students took feedback.

It is perhaps the combination of these two innovations – prompting reflection on past feedback and making past feedback easy to access – that is the most valuable way of taking the findings of the case study forward, particularly for students who may not be as highly motivated to access and reflect on past feedback as these postgraduate students.

Challenges of Introducing Unfamiliar Technology and Making Feedback More ‘Public’

Introducing an unfamiliar technology that is not part of mainstream practice not surprisingly produced a range of responses. This is not simply a matter of differences in technical skills, with younger students – so-called digital natives – being more able and willing to adopt new technologies than their probably older supervisors and tutors. Helsper and Eynon (2010) have suggested that many factors influence digital technology adoption and that, while many older people may lead technology-enriched lives, some supposed digital natives have a limited view of using technology for learning. Of particular concern is the inertia that can arise if the technology is not easy to access or its benefits are not immediately obvious.

Not surprisingly, the vision here for adopting technology was largely to support existing practice and maintain the status quo rather than to stimulate new practice. The feedback history tool was viewed as making existing processes more efficient. Drawing together material for a portfolio could easily be done digitally, and the feedback history report was useful for some in generating an overview of a student’s work, perhaps to make decisions about progression for struggling students. There was not much evidence of support for the original intention of the tool developers, which was to enable students and staff to identify progress (or lack of progress) explicitly, drawing on the now more visible past feedback as evidence; but greater awareness of a student’s learning journey might emerge more strongly with time.

Nevertheless, a reflective process that encourages students to revisit past feedback and reflect on changes they have made may be more immediately successful at promoting a longer-term approach to learning gain. The very simple feedback response form was easy for students to use and may have helped them pay more attention to feedback and how to address it. The evidence suggested that portfolios produced after the introduction of the new assessment and feedback processes contained more discussion of past feedback, and it may be that a combination of the feedback response form and the ‘background noise’ about feedback in programme meetings and teaching sessions resulted in an increased student response to feedback.

In this background noise there may be some concerns about making feedback more ‘public’. Both students and teaching staff may have concerns about feedback being available to others when in the past much feedback was only visible to the sender and recipient and maybe a couple of other assessors and examiners. Revisiting historical feedback by a wider team of people could produce staff development opportunities through comparing and discussing different approaches, but feedback samples might also be used for quality monitoring which might be viewed less favourably by academic staff. Similarly, students could see advantages of assessors having a sense of their learning trajectory, but also might be wary that past feedback, or even more so past grades, might influence a marker’s ability to judge a piece of work objectively according to current criteria.

Future Directions for E-Assessment and Feedback History Reporting

The feedback history tool was developed in the context of an institutional change initiative which introduced mandatory electronic submission of both assignments and feedback at an institutional level. This was a necessary precondition for using digital technology to gain a holistic overview of a student’s assessment and feedback journey, an idea which fed into a concerted nationwide effort in the UK to articulate and provide guidance on the electronic management of the assessment lifecycle, including feedback (Gray et al. 2015). At the same time, the digitisation of the feedback management process contributes additional data about students that can be used to automatically capture and process more detail about individual progress and performance, with potential for predictive analyses.

This field, learning analytics, has emerged as a key element of a wider trend towards ‘data-driven learning and assessment’, which Johnson et al. (2015, p. 12) identified as a mid-term driver for learning technology adoption in higher education over the next three to five years. But even though Johnson et al. recognise the potential of learning analytics, they warn that the field, while gaining traction, is still evolving and ‘solutions are elusive’ (p. 26). It is here that developments such as our tool can pave the way towards a better understanding of learning analytics and of how they can provide practical and beneficial information to learners, teachers and administrators.

Additional benefits of the feedback history tool can be gained from a better understanding of who needs what data in order to improve student feedback and the overall assessment process. The initial lessons learned from the project fed into a follow-up project at UCL to develop a more flexible and user-friendly assessment and feedback dashboard which would provide different user groups (now including students) with different information according to their requirements. Even in our small-scale pilot, we realised that requirements differ according to local preferences, assessment approaches and regulations, so an institution-wide solution needs to take different contexts into account and, for example, allow for adjustments of module teacher permissions between various parts of the institution.

Our feedback history tool assumes that feedback is already digitised and deposited in the VLE. It is also a relatively simplistic listing of data fields that are supposed to contain feedback. Our pilot used the formal submission points of two common VLE assignment activities: Turnitin and Moodle Assignments. The future UCL assessment and feedback dashboard will also list automated feedback from quizzes, but feedback, in particular informal feedback, often appears elsewhere: for example, as messages in free-flowing discussion forum threads, as audio feedback or in private messages, including email. Our tool does not capture such feedback, and it does not qualify or categorise feedback in any way. To address this and facilitate effective use of digital feedback, our vision is a tagging mechanism within the VLE that would allow a teacher – or student – to flag any item in the VLE as an instance of feedback so that it would be listed in the feedback history report. This could later help users find and identify particular aspects of feedback in order to pick up on learning gains and other improvements more effectively.
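The following Python sketch illustrates the envisaged tagging mechanism in the simplest possible terms; the names and structure are assumptions, not an existing Moodle feature. Any VLE item would carry a ‘feedback’ tag recording whose learning it concerns and who flagged it, and the feedback history report would then simply filter on that tag.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class FeedbackTag:
    """A flag marking any VLE item (forum post, audio file, message...) as feedback."""
    item_url: str          # where the item lives in the VLE
    student_id: str        # whose learning the feedback concerns
    tagged_by: str         # the teacher or student who applied the tag
    aspect: str = ""       # optional label, e.g. "academic writing"
    tagged_at: datetime = field(default_factory=datetime.utcnow)

def feedback_history_for(tags: List[FeedbackTag], student_id: str) -> List[FeedbackTag]:
    """Everything flagged as feedback for one student, oldest first,
    ready to be appended to the feedback history report."""
    return sorted((tag for tag in tags if tag.student_id == student_id),
                  key=lambda tag: tag.tagged_at)
```

The optional aspect label hints at how such tags could later help users find particular aspects of feedback, as described above, although how tags would be categorised in practice remains an open design question.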

Managing such complexities is a challenge for the future development of VLEs. Our experiences and new ideas informed a wider discussion in the UK on supporting the assessment and feedback lifecycle with digital technology, under the leadership of the Joint Information Systems Committee (JISC) (Gray et al. 2015). The IMS Global Learning Consortium picked up this discussion and is working towards technical definitions of assessment and feedback information to facilitate the exchange of relevant assessment information across VLEs and related learning technologies (Kraan 2015).

Conclusion: Technology Supported Change in Feedback Practice

It is difficult to draw any firm conclusions from early-stage and small-scale pilots such as this one. However, we have demonstrated the huge potential for using digital technology to support and encourage ipsative assessment processes. The ubiquitous VLE was selected in this case study to capture and present feedback over time; this feedback could come from different sources, such as peers, and in different formats, such as audio, if the VLE supports these options. In other contexts different technologies might provide a similar feedback history report for students.

The key question is then not about the technology, but about the value of visible feedback histories. Our case study has suggested that the value might be different for different stakeholders, and again this will be context-dependent. In our study, academic staff requiring an overview of student progress, such as programme leaders making progression decisions for borderline students or supervisors taking on new students, were particularly in favour of a feedback history. Students could also benefit from more accessible feedback, and when this was combined with a process for enabling students to reflect on feedback – in this case the student feedback response form – there was evidence of enhanced learning from feedback, or at least greater awareness of past feedback. There was some concern from both staff and students about who has access to the feedback history reports, and this will need to be negotiated locally, with the likelihood of different outcomes in different contexts. It seems that the increase in digital assessment opens up exciting possible futures for data analytics. Certainly we hope that learners will be beneficiaries as the accessibility of ‘feedback histories’ of individual learners facilitates an ipsative learning and assessment process.