
Learning Analytics

As I have argued elsewhere, Learning Analytics is a relatively new field of inquiry and its precise meaning is both contested and fluid (Ellis 2013).Footnote 1 It is again useful to draw on a definition of Learning Analytics that was offered by the first Learning Analytics and Knowledge (LAK) conference. Its call for papers defines Learning Analytics as:

the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimising learning and the environments in which it occurs (LAK n.d.).

Ferguson (2012) nuances this further, saying:

Implicit within this definition are the assumptions that Learning Analytics make use of pre-existing, machine-readable data, and that its techniques can be used to handle large sets of data that would not be practicable to deal with manually (Ferguson 2012 n.p.).

As Ferguson points out, Learning Analytics is synonymous with, incorporates, has grown out of and sits alongside a bewildering array of different terms and analytical approaches.Footnote 2 Several drivers have motivated the development of Learning Analytics, including pressure from funding bodies (particularly government agencies, but also fee-paying students and their parents) to achieve greater levels of transparency and accountability (Campbell and Oblinger 2007, p. 2). It has also been informed by a wide array of pedagogical and learning theories.Footnote 3 At the same time, as Ferguson notes, some of the early work in Learning Analytics was ‘pedagogically neutral’ in that it was “not designed to support any specific approach to teaching and learning” (Ferguson 2012, n.p.).

Much of the research in the field of Learning Analytics is focussed on questions of improvement in terms of better informed (i.e. data-led) decision-making at the level of the institution (Bach 2010; Campbell and Oblinger 2007; Siemens et al. 2011). As Campbell and Oblinger put it: “In higher education many institutional decisions are too important to be based only on intuition, anecdote, or presumption; critical decisions require facts and the testing of possible solutions” (Campbell and Oblinger, 2007, p. 2). There is, however, increasing emphasis on extending this data-led decision-making to tutors and students, thereby shifting the focus towards improving student learning.

At this point it is worth dwelling on what student learning actually is. After all, there are a wide variety of answers to the question “what does learning mean?” Theoretically at least, Learning and Assessment Analytics are applicable to all of them. This chapter, however, works from a constructivist pedagogical perspective, informed by Biggs, that learning and education are “about conceptual change, not just the acquisition of information” and that this takes place when “it is clear to students (and teachers) what is ‘appropriate’, what the objectives are, where all can see where they are supposed to be going, and where these objectives are buried in the assessment tasks” (Biggs 1999, p. 60). In other words, this chapter works from the principle of constructive alignment whereby constructivism is, from a teaching perspective, “used as a framework to guide decision-making at all stages in instructional design: in deriving curriculum objectives in terms of performances that represent a suitably high cognitive level, in deciding teaching/learning activities judged to elicit those performances, and to assess and summatively report student performance” (Biggs 1996, p. 347). This chapter proposes that to be most effective, and to align with the growing emphasis on and enthusiasm for self-regulated and self-directed learning, Learning Analytics needs to attend to the role that student-facing information might play in a constructivist educational paradigm.

Whether it is institution-, student- or tutor-facing, a significant proportion of Learning Analytics is preoccupied with predictive strategies based on identified patterns of behaviour and activity that indicate a higher likelihood of certain outcomes. As Ferguson points out, in its early incarnations, the impetus for a lot of the work in Learning Analytics came from a desire to improve student retention rates and, as such, the dominant outcome upon which a great deal of this work has been and remains focussed is a reduction in student attrition through withdrawal or failure. For instance, the opening statement of Campbell and Oblinger’s report makes the assertion that student success is “commonly measured as degree completion” (Campbell and Oblinger 2007, abstract). This chapter proposes that student success should be understood as something more than this: as students having been inspired, challenged and stretched such that they emerge from the educational experience with skills, abilities and knowledge that they did not have prior to enrolment, but also with a strong sense of self-awareness, alongside drive and commitment. Further, I contend that success should mean that they are also able to communicate this learning attainment to others in a way that is both compelling and supported with evidence. While some would argue that this evidence of learning attainment is implicit within a completed degree, I argue that being able to reflect on their learning achievement, to synthesise it from atomised courses into an account of the degree as a whole, and to compose the specificity and distinctiveness of that achievement into a compelling and well-evidenced story is becoming increasingly important to university graduates in a highly competitive employment and postgraduate study market. It is here that the role of e-portfolios is becoming so crucial. Before I go on to consider the specific affordances of e-portfolio tools and the pedagogy that these tools make available to teachers and students, it is important to consider the limits of Learning Analytics and the challenges that it presents.

The Limits of Learning Analytics

Getting Learning Analytics established as ‘business as usual’ at scale has proven challenging. The reasons for this are varied, but one of the key issues concerns the availability of data. On the one hand, a significant barrier to achieving successful operationalisation is the huge and growing volume of data that is potentially available for analysis. The 2011 Horizon Report, for instance, refers to “an explosion of data” (Johnson et al. 2011, p. 29) in the Higher Education sector, something Ferguson argues is an example of ‘big data’ (Ferguson 2012, n.p.; Manyika et al. n.d.). Ferguson asks the important question: “How can we extract value from these big sets of learning-related data?” (Ferguson 2012, n.p.). On the other hand, and counter-intuitively, another challenge and limitation of Learning Analytics is a paucity of data. As I have argued elsewhere, there are specific and significant gaps in the available data sets in the area of assessment and feedback (Ellis 2013). I have considered several reasons for this ‘gap’ in the available data, but I suggest that the most likely reason is

That the more finely granular level of data (such as student achievement against assessment criteria) has been, up to now, too difficult to collect and collate. This is a direct product of the continuing prevalence and persistence of paper-based marking systems that […] are difficult if not impossible to use for the purposes of Learning Analytics. […U]ntil relatively recently, the possibility of collecting and collating assessment data at a level of granularity that is meaningful and useful has simply been unthinkable. With the advent of useable, affordable and reliable electronic marking tools and the upsurge in interest across the sector to move towards Electronic Assessment Management, this is, arguably, about to change (Ellis 2013, p. 663).

As I will go on to discuss below, Assessment Analytics, as a subset of Learning Analytics, is an as yet untapped but potentially hugely significant area of future development, particularly as a way of developing student-facing analytics strategies.

The next issue that arises is what to do with the information, or data. Ferguson points out that while most online learning tools provide data on student behaviour, activity and interaction, what they offer to teachers or learners is often difficult to interpret and also difficult to put to use in a way that can have a beneficial impact on student learning (Ferguson 2012, n.p.). This returns us to the issue she identifies as ‘pedagogic neutrality’. While it is difficult to understand precisely what ‘neutrality’ might mean in this context, or whether pedagogical neutrality is even possible, the point Ferguson is making here is perhaps better understood as being about data whose usefulness is limited or ill-defined. At least part of the problem relates to the tendency of Learning Analytics to measure things that teachers and students may not identify as being centrally significant to learning, such as interaction in online social learning networks.

The over-abundance of data in some areas alongside the paucity of it in others, accompanied by uncertain or unclear uses to which this data might be meaningfully put, offers an important reminder of some of the risks we face as we embark upon Learning Analytics strategies. One of the potential pitfalls of Learning Analytics is that it can be driven by the wrong motivating factors. Key amongst these is the risk of measuring the wrong things, measuring things that are not meaningful, measuring things simply because they are measurable and/or not measuring the right things. Arguably, when it comes to Assessment Analytics, it is most appropriate to work from first principles and for those principles to be pedagogical rather than statistical. As we approach the design of Learning Analytics strategies, it is worthwhile heeding George Siemens’s call to take “a holistic view of L[earning] A[nalytics] that includes […] practical issues, but also aspects related to the data, such as openness, accessibility, and ethics, as well as the particular pedagogical goals and usage context of the L[earning] A[nalytics] tools (e.g., detecting at-risk students, supporting self-awareness, or enhancing instructor awareness)” (Siemens cited in Martinez-Maldonado et al. 2015, p. 10). As Campbell and Oblinger point out, knowing why you are doing analytics is an important starting point (Campbell and Oblinger 2007). Martinez-Maldonado et al. have identified “the need for new design methodologies for L[earning] A[nalytics] tools, providing a pedagogical underpinning and considering the different actors (e.g., instructors and students), the dimensions of usability in learning contexts […] (individuals, groups of students, and the classroom), the learning goals, data sources, and the tasks to be accomplished” (Dillenbourg cited in Martinez-Maldonado et al. 2015, p. 11). Their LATUX workflow offers a useful set of questions to guide the early design process including the particularly pertinent “what are the (unexplored) possibilities?” (Martinez-Maldonado et al. 2015, p. 17). This question is an important reminder of the fact that Learning Analytics has the potential to allow us to know things and therefore do things that have previously been impossible or unthinkable. This growing body of work on the methodological aspects of Learning Analytics implores us to consider the factors that motivate what is measured, how it is measured, what patterns are identified, how it is acted upon, who acts upon it and when. Most importantly, it reminds us of the importance of ensuring that these considerations should be derived from pedagogy rather than simply by what data is available or obtainable.

It is also important to consider some of the reasons why Learning Analytics might not be undertaken, in order to consider how best to mitigate potentially negative or ‘backwash’ effects. While it is outside the scope of this chapter to consider these possible objections in detail, it is worth identifying them at this point. Prime amongst these is the issue of ethics for both students and tutors. The sense that some may have of being ‘surveilled’ through an analytics strategy may raise concerns about privacy and academic freedom, and may raise the spectre of a ‘big brother’ institution. Mitigating these concerns with clear lines of consent and strategic purposes (to improve student learning rather than to ‘police’ poor teaching) will be important. Another concern may be that the aggregation of information for students is an instance of infantilising or ‘spoon feeding’ them. It will therefore be important to ensure that analytics automate, or make easier, more convenient or more obvious, things that students are offered anyway and, as Campbell and Oblinger argue, that they are designed to “steer students toward self-sufficiency” (Campbell and Oblinger 2007, p. 10). Finally, concerns that a Learning Analytics strategy might have a ‘flattening’ effect by leading the pedagogy (rather than responding to or supporting it) are significant. Amongst these concerns, in the area of Assessment Analytics we can usefully include concerns focused on grade integrity and the use of assessment criteria and rubrics to evaluate student work (Sadler 2007, 2009a, b, 2010b). It is also important to consider concerns about the potential impact this might have on knowledge acquisition and accumulation (Avis 2000; Clegg 2011; Maton 2009).Footnote 4 Arguably, it is worth pursuing a Learning Analytics strategy only if we can mitigate these concerns.

Finally, it is important to remember that for any analytics strategy to be useful, and therefore effective, it is not the data in and of itself that matters. Simply providing data and an analysis of it does not, ultimately, accomplish anything. It is the set of actions taken because of, informed by and based on the analysis that has the impact. This brings to mind the work of David Boud, who has argued for the importance of closing the feedback loop in assessment. He suggests that what we tend to think of as feedback on assessment only becomes feedback when a student acts upon it. He and Elizabeth Molloy draw on Sadler’s pithy observation that without that subsequent action, feedback is only ‘dangling data’ (Sadler quoted in Boud and Molloy 2013, loc 434). While I will return to this later, the same is true of student-facing Learning Analytics: analytics without interventions or actions is a fundamentally pointless activity. So, for Assessment Analytics to be effective, students need the guidance, support and motivation to engage with, interpret and act on what it is telling them. It is here that we can begin to see the important role that e-portfolios can play in a Learning Analytics strategy because, in order to engage with the information that has been made available to them, students need a space in which to do so.

E-Portfolios

Like ‘Learning Analytics’, the term ‘e-portfolio’ is contested and it is not easy to arrive at a good, stable working definition of what it means. As Hughes suggests, the discussion about e-portfolios has often been dominated by the “tools used rather than the transformations in learning and teaching that such a domain and conceptual shift might support” (Hughes 2008, p. 437). The definition Hughes prefers comes from the Centre for Recording Achievement and proposes that an e-portfolio is, or might be: “a repository, a means of presenting oneself and one’s skills, qualities and achievements, a guidance tool, a means of sharing and collaborating and a means of encouraging a sense of personal identity” (CRA quoted in Hughes 2008). Another, pithier, definition that she points us towards, which I commend for both its efficacy and efficiency, comes from La Guardia Community College, which identifies its e-portfolio as a place to “collect, select, reflect and connect” (Hughes 2008, p. 439). I will later return to these four key purposes of the e-portfolio to explore further how Learning Analytics and e-portfolios can, and arguably should, connect and work together to bring about important transformations in student learning and in our educational model as a whole.

Assessment Analytics within a Learning Analytics Strategy

In terms of developing student-facing Learning Analytics strategies, assessment seems an obvious place to start. As I have argued elsewhere, the key value of including assessment data in a Learning Analytics strategy is that, as far as students are concerned, assessment is deeply meaningful. That is largely because it provides students with tangible evidence of their learning attainment and progress. For students, assessment results are the return on their investment of both time and money (see Taras 2001). As the SOLAR concept paper puts it, Learning Analytics can “contribute to learner motivation by providing detailed information about her performance” (Siemens et al. 2011, p. 6). Finding ways to get more value out of students’ investment is therefore well worth pursuing.

The forms the data take, and the ways in which they are generated and later harvested or mined for the purposes of analytics, are wide and varied. As outlined above, it is really only with the advent of eMarking and Electronic Assessment Management (EAM) tools, such as Grademark, available within the Turnitin tool developed by iParadigms, that it has become feasible to collect these data sets. Of course, e-portfolio tools themselves now incorporate assessment tools, such as the Gateways available within PebblePad. It is fair to say, however, that the assessment tools within e-portfolios are fairly unsophisticated in comparison to specialist EAM tools. Most e-portfolio tools also integrate with Learning Management Systems (LMS), such as Blackboard and Moodle, which have well-established EAM tools such as rubrics and grade management functions. The data sets for Assessment Analytics can therefore be generated inside, outside and through e-portfolio tools. Whether they are generated inside or outside of e-portfolios, these data could usefully include the frequency of common comments that are made by tutors on student work as part of the marking and feedback process. These comments often relate to common errors or areas of weakness, or they relate to areas of improvement and strength. When appended to student work as annotations, these comments can serve the dual purpose of providing useful information to students on the strengths and weaknesses of that piece of work, while also laying down a data trail that can be available for later analysis. Within Grademark, the comments, which are known as Quickmarks, can be customised to suit a particular assessment task, a specific set of learning outcomes, or even to suit the aims of an individual teacher or a group of teachers.Footnote 5 It is therefore possible to create a set of Quickmarks that cover a range of achievement levels or scenarios (e.g. a set that identifies if work is not meeting, approaching, meeting or exceeding a particular competency) that tutors can use to evaluate particular characteristics of students’ work. If these are used systematically, it is possible to gather a rich picture of such things as the competencies students are struggling with the most. In a similar way, the selection of ‘cells’ in a marking rubric, such as those available within Grademark and other tools such as ReView and most LMSs, provides a way to record and gather this information at the same time as communicating it to students. Many of these tools also allow for the collection of peer and self-evaluation data that can be compared to each other and to tutor evaluation decisions. Even apparently incidental information, such as the date and time of assessment submission, could be incorporated into an Assessment Analytics strategy.
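
As a concrete illustration of the kind of data trail described above, the following is a minimal sketch, in Python, of how the frequency of standardised marker comments might be aggregated across a cohort. The record structure, comment wording and competency labels are invented for the purposes of the example; they do not represent the actual export format of Grademark, PebblePad or any other tool.

```python
from collections import Counter

# Hypothetical, simplified records of standardised marker annotations on student work.
# Real EAM exports will differ in structure and field names; this is illustrative only.
annotations = [
    {"student": "s001", "task": "essay_1", "comment": "referencing: incomplete citation"},
    {"student": "s002", "task": "essay_1", "comment": "argument: claim not supported by evidence"},
    {"student": "s003", "task": "essay_1", "comment": "referencing: incomplete citation"},
    {"student": "s001", "task": "essay_1", "comment": "structure: paragraph lacks topic sentence"},
    {"student": "s002", "task": "essay_1", "comment": "referencing: incomplete citation"},
]

# How often was each standardised comment applied across the cohort?
comment_frequency = Counter(a["comment"] for a in annotations)

# Grouping by the (invented) competency prefix gives a coarser, cohort-level picture
# of the areas students are struggling with most.
competency_frequency = Counter(a["comment"].split(":")[0] for a in annotations)

for competency, count in competency_frequency.most_common():
    print(f"{competency}: {count} occurrences across the cohort")
```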

There is a wide variety of ways in which assessment data can be useful as part of such a strategy. It can be aggregated and then ‘cut’ in many different ways: across a cohort as well as between cohorts, and down to the level of an individual student within and across courses and levels, as well as across time. At the individual level, this can include such things as providing students with information about where their result places them in the cohort (in terms of final results, achievement against specified learning outcomes and even the frequency of common problems). This may have the potential to motivate students to improve and aspire to higher levels of achievement. Evidence of common errors and cohort-wide weaknesses may also turn students’ attention to areas they have previously neglected or considered unimportant or insignificant. By comparing self- and peer-evaluation data to tutor evaluations, it is possible to identify the development of self-evaluation skills as well as how well assessment criteria are understood by individual students and by the cohort as a whole. Pre-submission feedback that is informed by evidence of the strengths and weaknesses of previous student cohorts in response to a specific assessment task can guide students when they approach that same task (see Boud and Molloy 2013, loc 500). Post-submission feedback may be effective in motivating students to engage with their feedback, take steps to understand it and act upon it. These data sets can become artefacts in themselves that can be imported into e-portfolio tools. In the near future we will almost certainly see dashboard tools built that aggregate these data sets into student- and tutor-facing user interfaces that could be incorporated within e-portfolio tools.
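
To illustrate two of the simpler ‘cuts’ mentioned above, the sketch below uses invented marks and criteria, on hypothetical scales, to show a student’s position within a cohort and the gap between self- and tutor-evaluation against each criterion. It is a sketch of the idea only, not of how any particular tool performs or presents these calculations.

```python
from statistics import mean

# Hypothetical final results for one assessment task across a small cohort (0-100 scale).
cohort_results = {"s001": 72, "s002": 58, "s003": 81, "s004": 64, "s005": 77}

def cohort_position(student_id: str, results: dict) -> str:
    """Report where a student's result sits relative to the rest of the cohort."""
    score = results[student_id]
    below = sum(1 for other in results.values() if other < score)
    share_below = 100 * below / (len(results) - 1)
    return (f"Your mark of {score} is higher than {share_below:.0f}% of the cohort "
            f"(cohort mean: {mean(results.values()):.1f}).")

# Hypothetical self- and tutor-evaluations against the same criteria (1-5 scale),
# used to gauge how well the assessment criteria are understood.
self_eval = {"argument": 4, "referencing": 4, "structure": 3}
tutor_eval = {"argument": 3, "referencing": 2, "structure": 3}
calibration_gap = {criterion: self_eval[criterion] - tutor_eval[criterion] for criterion in tutor_eval}

print(cohort_position("s001", cohort_results))
print("Self- versus tutor-evaluation gap per criterion:", calibration_gap)
```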

While the purpose of assessment feedback is to give students an indication of their learning achievement at a particular point in time, this information is almost always provided in isolation. One of the key benefits of providing students with Assessment Analytics data as part of a larger Learning Analytics strategy is that it might help both students and teachers join these isolated pieces of information together to see a bigger picture. This bigger picture has the potential to help students and teachers appraise student performance across all their assessment tasks in all of their courses, and even across degrees if they are undertaking a dual degree program, and then locate this performance against a set of standards, learning outcomes and assessment criteria. But it also allows students to locate their current performance against their previous performance (their former self) and against concurrent performance in different contexts (their other selves). It could also allow students and teachers to get a sense of students’ performance relative to their peers, and to track and trace their performance against their own self-evaluation and their personal goals (their future self). These kinds of comparisons are, for the most part, unexplored in the way that Higher Education courses and assessment are currently designed, delivered and administered. This approach is aimed at providing a more holistic view of student performance and attainment.
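
A minimal sketch of what joining these isolated results together might involve follows. The course codes, marks and cohort means are invented; a working system would draw such records from institutional data rather than hard-coding them, but the principle of tracing a student’s trajectory against their former self and against their peers is the same.

```python
# Hypothetical per-task results for one student across courses and semesters,
# alongside cohort averages, illustrating the 'former self' and peer comparisons.
history = [
    {"semester": "2015-1", "course": "HIST101", "mark": 62, "cohort_mean": 65},
    {"semester": "2015-1", "course": "PHIL102", "mark": 58, "cohort_mean": 63},
    {"semester": "2015-2", "course": "HIST102", "mark": 68, "cohort_mean": 66},
    {"semester": "2016-1", "course": "HIST201", "mark": 74, "cohort_mean": 67},
]

# Trajectory against the student's former self: mean mark and mean gap to the cohort,
# semester by semester.
for semester in sorted({record["semester"] for record in history}):
    marks = [r["mark"] for r in history if r["semester"] == semester]
    gaps = [r["mark"] - r["cohort_mean"] for r in history if r["semester"] == semester]
    print(f"{semester}: mean mark {sum(marks) / len(marks):.1f}, "
          f"mean gap to cohort {sum(gaps) / len(gaps):+.1f}")
```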

By providing this more holistic ‘bigger picture’ view, Assessment Analytics coupled with an e-portfolio has the potential to address one of the biggest challenges that we currently face across the Higher Education sector. This challenge comes from the fact that our undergraduate degrees tend to be structured in ways that are deeply and perhaps dangerously atomised. By this I mean that while a student, as Geoff Scott puts it, comes to university to study a particular degree with a particular name and a particular purpose (Scott 2015, pers. comm.), we have a tendency to break these degrees up into smaller ‘chunks.’ These ‘chunks’ can be ‘streams’ (such as the specialisations or majors and minors that are common in American and Australian degrees) and/or levels (such as the foundation, intermediate and honours years that typify British undergraduate degrees). In almost all university degrees, these are further broken down into individual subjects, courses or modules. Within these subjects learning is further ‘chunked’ into topics, which often coincide with weekly timetabled class sessions and/or assessment tasks. Piecing all of these ‘chunks’ back together to make the whole can be challenging for both academic staff and students. As the principles of constructive alignment and curriculum coherence are encouraging more higher education institutions (HEIs) to introduce well-aligned, outcomes-based education, more degrees are being ‘mapped’ such that both students and their teachers can get a clear sense of how all of these atomised ‘chunks’ fit together to constitute the whole. Whereas sets of program or degree learning outcomes and statements of graduate attributes or capabilities have often been constructed as aspirational or desirable, institutions are increasingly not only outlining what they intend their graduates in and across their degrees to achieve, but also setting out to assure stakeholders (students, their parents and the graduate employment market) that graduates have indeed achieved them. Such assurance-of-learning requirements are now quite standard in accredited degrees (such as qualifications in medicine). It is no coincidence that it is these discipline areas that lead the way in the use of portfolios, and latterly e-portfolios, for students as a means of managing, tracking and providing evidence of their learning achievement (see Van Tartwijk and Driessen 2009).

This more holistic approach that Learning Analytics can make available to students and their teachers aligns usefully with what Sadler argues we should be aspiring to in our approach to assessment and feedback; he refers to it as a “full-bodied concept of quality” (Sadler 2010a, p. 548). Providing students with support and guidance at this holistic, full-bodied level, in a joined-up way, is arguably becoming something an increasing proportion of students expect, if not feel entitled to, as part of their higher education experience. After all, they are becoming very accustomed to this kind of holistic, data-supported view of their behaviour in many other aspects of their lives; this is most obvious in their experiences of social media, but it is also becoming commonplace in other areas such as shopping, finance, and exercise and fitness. The fact that their higher education providers are not able to provide them with this bigger picture of their own behaviour is almost certainly making what we do seem increasingly outdated and unsatisfactory.

Many institutions are now turning to Assessment Analytics as a way of making their assurance-of-learning strategies both more efficient and more reliable. For the time being at least, the dominant means by which students demonstrate their learning attainment in a way that can then provide assurance that learning outcomes have been met, is through their performance in assessment tasks. Using EAM to record the professional judgements of academic staff regarding student performance in assessment tasks against standards-based assessment criteria makes the harvesting of that data, even across large, team-taught and/or geographically dispersed student cohorts, relatively quick, cheap and easy. Again, it is possible to ‘cut’ the data in different ways: not just to ascertain which individuals have met, not met or only partly met specific learning outcomes, but also to ascertain which learning outcomes have been most (or least) frequently met across the cohort. This means that teachers can take proactive steps to provide targeted, just-in-time educative interventions at both the individual and cohort level, addressing areas of weakness at the point of need.
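
The sketch below illustrates this kind of cohort-level ‘cut’ using invented standards-based judgements. The learning outcome labels and the three-point ‘met / partly met / not met’ scale are assumptions made for the example, not a prescribed marking scheme.

```python
from collections import defaultdict

# Hypothetical judgements recorded at marking time: for each student and learning
# outcome, whether the outcome was judged 'met', 'partly met' or 'not met'.
judgements = [
    ("s001", "LO1", "met"), ("s001", "LO2", "partly met"),
    ("s002", "LO1", "met"), ("s002", "LO2", "not met"),
    ("s003", "LO1", "partly met"), ("s003", "LO2", "not met"),
]

by_outcome = defaultdict(list)
for student, outcome, result in judgements:
    by_outcome[outcome].append(result)

# Outcomes least frequently met across the cohort are candidates for targeted,
# just-in-time interventions.
for outcome, results in sorted(by_outcome.items()):
    met_rate = results.count("met") / len(results)
    print(f"{outcome}: judged 'met' for {met_rate:.0%} of the cohort")
```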

The Role of E-Portfolios in a Learning Analytics Strategy

While this ability to assure learning achievement is useful from an institutional point of view, particularly when needing to report to professional, statutory and regulatory bodies for accreditation purposes, it stands to reason that pairing student-facing Assessment Analytics data (in the form of a report or even a dashboard) with an e-portfolio makes it possible for students to begin to take responsibility for assuring their own learning. I would go as far as to say that in order to provide this holistic, ‘bigger picture’ view, it is essential that Assessment Analytics be coupled with an e-portfolio. It is here that I return to the La Guardia definition of the key affordances of the e-portfolio in teaching and learning: collect, select, reflect and connect. First is the role of the e-portfolio as a place for students to collect evidence of their learning achievement. Providing students with a curriculum map, against which their learning attainment can be tracked as they progress through a program of study, moves Learning Analytics into the realm of self-regulated learning. This approach helps students and teachers move beyond what Sadler refers to as the “one-way telling” that characterises so much of what is understood as ‘feedback’ (Sadler 2015, p. 16). It also sets students “on the path to more informed self-monitoring and […] connoisseurship” (Sadler 2015, p. 18). This resonates once again with Boud’s important and influential work on assessment and feedback, where he advocates “closing the feedback loop” and seeking “evidence of effects”, with “both teachers and students seeing the outcome of feedback on improved performance in subsequent tasks” (Boud and Molloy 2013, loc 154).
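
As a simple illustration of the kind of student-facing progress view a curriculum map makes possible, the sketch below maps invented course codes to invented program learning outcomes and compares them with the outcomes a student has so far evidenced. A real implementation would be driven by the institution’s own curriculum mapping and assessment data rather than these assumed values.

```python
# Hypothetical curriculum map: which program learning outcomes each course assesses.
curriculum_map = {
    "HIST101": ["PLO1", "PLO3"],
    "HIST102": ["PLO2", "PLO3"],
    "HIST201": ["PLO1", "PLO4"],
}

# Outcomes the student has so far demonstrated (e.g. judged 'met' in assessed work).
attained = {"PLO1", "PLO3"}

all_outcomes = {plo for plos in curriculum_map.values() for plo in plos}
remaining = sorted(all_outcomes - attained)

# A student-facing dashboard or e-portfolio view could surface exactly this:
# what has been evidenced so far, and what remains to be demonstrated.
print(f"Demonstrated {len(attained)} of {len(all_outcomes)} program learning outcomes.")
print("Still to evidence:", ", ".join(remaining))
```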

It is at this point that the usefulness of the ‘reflect’ and ‘select’ elements of e-portfolios becomes apparent. In the first instance, as suggested earlier, a fundamental truth of any analytics strategy is that data in itself is useless unless actions are taken as a result of it. In other words, for analytics to be effective, the data needs to have somewhere to go and to be worked with further. Prompting and rewarding students for reflecting on what their data are showing them is the first step. As Van Tartwijk and Driessen put it: “a portfolio can […] stimulate reflection, because collecting and selecting work samples, evaluations and other types of materials that are illustrative of the work done, compels learners to look back on what they have done and analyse what they have and have not yet accomplished” (Van Tartwijk and Driessen 2009, p. 791). A Learning Analytics strategy that aggregates data from a large number of diverse assessment activities into an e-portfolio gives students a space in which to consider their performance and improvement across both tasks and time. This allows them to measure their own competence development and, more importantly, to set detailed and realistic goals towards which they can work in the future. It is important to note, however, that while e-portfolios can provide students with a space in which to develop their reflective skills, and particularly their reflective writing skills, e-portfolios in and of themselves cannot teach students these skills. Offering students dedicated and targeted guidance and support on the development of their reflective capacities is an important component of any reflective learning strategy (see Buckley et al. 2009; Moon 2007).

Second, students can also take responsibility for assuring their own learning achievement by using e-portfolios to select the pieces of work that best demonstrate their achievements. This role of e-portfolios sets up an inevitable tension between their purpose for reflection, on the one hand, and for assessment, on the other. As Van Tartwijk and Driessen put it, “an argument against this dual function is that […] Learners may be reluctant to expose their less successful efforts at specific tasks and to reflect on strategies for addressing weaknesses if they believe they are at risk of having ‘failures’ turned against them in an assessment situation. Portfolios that are not assessed, on the other hand, do not ‘reward’ learners for the time and energy they invest in them” (Van Tartwijk and Driessen 2009, p. 793). This is an important reminder to find ways to structure e-portfolios and their use such that they can achieve these multiple functions and purposes. The idea of students self-selecting work that they feel best demonstrates their achievement of learning outcomes and competencies is, as Hughes argues, one of the ways that e-portfolios are playing such an important role in the destabilising of traditional notions of teaching and learning (Hughes 2008). This movement from tutor-assured to student-assured learning is part of an important sector-wide shift towards a more participatory and collaborative pedagogical approach.

The final component that makes e-portfolios so useful and valuable is their ability to help students ‘connect’. These connections can be established both during a student’s program of study and, importantly, beyond graduation as they enter the world of work. Integrating e-portfolios into a social learning context could allow students to develop and harness folksonomies whereby such things as the attitudes and behaviours of high-achieving students are visible to and shared with everyone, thus guiding and motivating their behaviour. Gamification (whereby students are ‘rewarded’ for achieving against markers known to be associated with student success, such as making regular use of the library) may also have some use. In these contexts, Learning Analytics could operate as a kind of nudge analytics, making plain which pathways, behaviours and strategies are most likely to result in success. The affordances of e-portfolios for making and maintaining connections could help to support these interactions.

Beyond graduation, by making possible the collection and selection of a series of artefacts, combined with a space in which to undertake effective reflection on how these artefacts constitute evidence of learning achievement, e-portfolios enable and empower students to first curate evidence of their successful learning journey and then to compose distinctive and compelling stories of themselves that they can tell to the graduate employment market. At this point it is useful to clarify what I mean by the term ‘graduate employment market’ because, of course, an increasing proportion of graduates from HEIs will never be, or even aspire to be, employees. So, when I speak of the graduate employment market, I take this to mean all aspects of entrepreneurship, including venture capital, seed funding, crowdsourcing, partnership and so on. This usage also acknowledges that some students pursue higher education qualifications for the reward of learning alone and have no attendant career aspirations. Their need to compose a ‘story’ of their learning journey and achievement is just as important and legitimate. Among the affordances that make e-portfolios so compelling, and ultimately so valuable to higher education, are their persistence, availability and accessibility. In terms of persistence, e-portfolios, as a means of showcasing student work and achievement, remain available to students even after graduation. Because they are online, they are widely shareable, being available and accessible to anyone, anytime, anywhere. As such, they can provide a flexible and attractive ‘shop window’ through which graduates can display the ‘wares’ of their learning achievement, alongside their distinctive qualities and capabilities.

Conclusion

It is clear that Academic and Learning Analytics offer an exciting and powerful new strategic direction in Higher Education. It is vital, however, that Learning Analytics embrace student- and tutor-facing strategies. In order to do this, it stands to reason that data from assessment and feedback (Assessment Analytics) is central. It is important that the design principles for an Assessment Analytics strategy be informed by the pedagogical theory of assessment and feedback. Such a strategy should retain the fundamental principles of assessment but, perhaps more importantly, should also encourage a move away from guesswork, anecdote and speculation towards providing informed answers to questions of student attainment and achievement. It should also deliberately work towards outcomes such as students’ self-regulated learning and students assuring their own learning, in order to facilitate the development of the collaborative and participatory pedagogies that are so important to the future relevance, and therefore value, of higher education. As a tool to help students collect, select, reflect and connect, e-portfolios play a vital role in supporting and facilitating these endeavours. What remains, now, is to begin the practical work of piloting and evaluating these strategies to establish which are both practicable and effective in achieving the outcomes envisaged here. This is an exciting area for future research and development.