
Using E-Portfolios for Assessment: An Overview

Learning portfolios as an assessment tool are not a new invention by any stretch of the imagination. In fact, one of the most enduring perspectives on learning portfolios dates from the nineties and defines portfolios as “a purposeful collection of student work that exhibits the student’s efforts, progress, and achievements in one or more areas. The collection must include student participation in selecting contents, the criteria for selection, the criteria for judging merit, and evidence of student self-reflection” (Paulson et al. 1991: 60). Since then, portfolios or their digitized versions, e-portfolios, have been defined as (digitized) collections of artefacts (Lorenzo and Ittelson 2005), repositories of (student) work (Shroff et al. 2013) or even as digital containers capable of storing visual and auditory content (Abrami and Barrett 2005: 1).

The shift from paper versions to digitized versions, however, has been evident since the early years of the twenty-first century; it parallels the shift towards e-learning in general and has entailed a variety of affordances, affordability and ubiquity to name just two (Light et al. 2012: ix–x). Zubizarreta points out that despite the history of portfolios in certain disciplines, the portfolio approach to gauging student accomplishments and growth in learning (while not entirely new in higher education) has historically received more attention in the K-12 [schools] arena (2009: 4). All authors agree, however, that using portfolios for assessment is gaining momentum in the higher education sector, and this trend is not restricted to the West but also includes Asian countries such as Singapore, Japan and, most importantly for us, Hong Kong (Zubizarreta 2009: 4).

The trend of using e-portfolios in higher education institutions in Hong Kong is closely related to the concept of Outcomes-Based Teaching and Learning, which has been adopted in Hong Kong since 2010. This approach is intended to enable “evaluation and improving quality” and “gathering credible evidence for assessing student learning” (University Grants Committee [UGC] 2011).

Since then, institutions have worked on their curricula to achieve constructive alignment (Biggs and Tang 2007) between Intended Learning Outcomes, Teaching and Learning Activities and Assessment. Through curriculum planning, the Course Intended Learning Outcomes (CILOs) of individual courses have been mapped to the matching Programme Intended Learning Outcomes (PILOs), which in turn have been mapped to the Graduate Attributes (GAs) of the institution. The next stage is to ensure that the PILOs and Graduate Attributes are being achieved at an institutional level; this is commonly referred to as outcomes assessment (OA). At the Hong Kong Baptist University, OA has been conceptualised and piloted under the Evidence Collection Initiative (ECI) and has six testing components distributed over three levels, which are by and large quantitative methods using external tests with some elements of course-embedded assessment:

  • Course Level: CEA, FRE

  • Programme Level: Aggregated CEA, LEI-Programme

  • University Level: University Academic Test, LEI-P/Personal and Social Responsibility Inventory

(Hong Kong Baptist University, Centre for Holistic Teaching and Learning Evidence Collection Initiative—Report for AY2012–2013 and Plan for AY2013–2014).

Underlying this concept of OA is the assumption that since constructive alignment clearly defines and assesses outcomes, OA, especially when it uses course-embedded assessments, is a good indicator of student learning (Hernon and Schwartz 2006). It has, however, been argued that “while the concept of constructive alignment can facilitate instructional planning at the course level to focus on learning outcomes, it may not be able to facilitate the integration of broader sets of outcomes that may be required at institutional or society levels” (Kennedy 2011: 212). For this, an “integrated approach” is proposed in which “competencies are relational, involve reflective practice and place importance on context” (ibid.). Kennedy (ibid.: 213) argues that “it follows from such an approach that assessment will be very challenging since its focus will be on the attainment of complex outcomes and the extent to which they have been achieved. Yet this should not be a deterrent from considering such an approach since it can lead to the development of meaningful, relevant and representative outcomes required by institutions and the community”. Pelliccione and Dixon (2008: 750) argue further that “quality is a difficult concept to define given the use of a traditional assessment framework and it cannot be simply reduced to a set of easily quantified learning outcomes. Students learn in different ways and assessment which supports learning needs to be flexible and take into account the needs of individuals in order for them to make sense of feedback in the context of their own environment”.

The e-portfolio as an assessment tool lends itself very well to this idea of a flexible model of assessment. The outcomes-based approach to teaching, learning and assessment which tertiary institutions in Hong Kong have embraced emphasises learner-centred practices to help achieve higher-level outcomes such as evaluation, reflection and inquiry. Student e-portfolios support learners in taking an active role in achieving these higher-level learning outcomes by giving them ownership of their own learning (Cambridge 2010: 25). In terms of assessment, e-portfolios support criterion-referenced as well as formative assessment. Cambridge (2010: 25) points out that “in giving students a place to reflect on their experiences through the artefacts of those experiences and the ability to creatively express their understanding of who they are and what they have accomplished, e-portfolios take into account the importance of authenticity to deep learning”. E-portfolios provide students with an avenue to demonstrate not only their accomplishments but also their information and communication technology (ICT) capabilities. These can be illustrated through selected and self-made images, multimedia, blog entries and hyperlinks related to their overall learning experiences. Furthermore, these artefacts should also include students’ reflections on their learning and experiences as well as comments from course lecturers, tutors and peers on students’ submissions.

E-portfolios are also powerful tools for self-directed evaluation and assessment. For example, Johnson et al. (2010: 9) observe that “the development of a portfolio encourages learners to shift from playing a passive role in assessment and evaluation—in which they are pressed to focus on external issues, such as what questions the instructors are going to ask and what they should be studying—to an active role, in which they must engage in more complex thinking and self evaluation in choosing representations of what they learned. This route thus requires students to reflect on and demonstrate their competencies with real world artefacts”. Shroff et al. (2013: 144) summarize the research, finding that “the e-portfolio can also be a powerful tool to (1) promote learning (including learning from the process of assembling the portfolio); (2) improve critical thinking and content areas; (3) record accomplishments in an educational context held by the students for their own use; (4) assess long term, ongoing, authentic evaluation, and self-evaluation and self-reflection, and (5) provide evidence of continuous development”. In their own research on the implementation of e-portfolios for outcomes-based assessment, Pelliccione and Dixon (2008: 759) find that:

Throughout this research it has become clear that there are several advantages to implementing an ongoing and comprehensive approach to the development of e-portfolios in undergraduate education programs. Not only do they encourage the explicit alignment of organisational generic student outcomes with those of individual programs but it appears that student engagement with this form of selecting, describing, analysing and appraising each chosen artefact empowers students to become the drivers of their own development.

But this is easier said than done. The usual affordances associated with e-portfolio integration (lifelong learning, personal and professional development, developing reflective practice, etc.) are valid in the long-term institutional context, but vague in the short term and for the purposes of assessment within a semester or course. This is an issue which affects not only teachers but also the students who are required to create an e-portfolio, and it features prominently in the case study analyses included in this volume. It is a significant factor in accepting or rejecting e-portfolios as a valid teaching and learning exercise, as was shown in the study by Shroff et al. (2013: 143), who describe an Attitude Towards Learning (ATL) using e-portfolios. In fact, Ayala (2006: 13) goes as far as to claim that an approach which treats e-portfolios as a top-down institutional mandate, without considering the students’ needs, “would hurt those students the most who created electronic portfolios in response to campus or course requirements established without adequate regard to their effectiveness in higher education”. Based on their own empirical research on the implementation of e-portfolios in institutions of higher education in Hong Kong, Deneen and Brown (2014: 1) point out that faculties, programmes and universities may depend more on enthusiasm than on critical research when it comes to e-portfolio management and adoption.

REFLECT: A Community of Practice on Student E-Portfolios

Enthusiasm did play a big role at the Hong Kong Baptist University as well when, in May 2014, a Community of Practice (CoP) was set up to exchange ideas on how student e-portfolios could become a tool for assessment and lifelong learning and provide evidence of student achievement of the HKBU Graduate Attributes (Fig. 1.1). The CoP included 12 like-minded colleagues from multiple disciplines and learning centres at the University. They were united either by their experience of working with e-portfolios as assessment tools or by their desire to introduce new forms of assessment. The e-portfolios would reflect learning in their respective courses as well as support both formative and summative modes of assessment (Chaudhuri and Chan 2016: 1).

Fig. 1.1 The HKBU graduate attributes for undergraduate courses

Although the CoP was set up on the basis of the enthusiasm of colleagues interested in testing e-portfolios in their respective disciplines, its agenda was intended to address some of the issues associated with implementing e-portfolios in university courses, issues generally associated with integrating technology in higher education. First and foremost, the CoP wanted to address the fact that e-portfolios are generally restricted to specific disciplines where collecting artefacts and reflecting on them to showcase professional and/or academic development seems an obvious choice. Traditionally, some of these disciplines have been Education (pre-service teacher-students), Language (writing courses) and, of course, the Visual Arts. The CoP, on the other hand, set out to involve colleagues from disciplines where e-portfolios were not the obvious choice for assessment. Disciplines represented in the CoP were History, Mathematics, Business Communication, Physical Education, European Studies and Education Studies, and members included Professors, Assistant and Associate Professors, Lecturers, Learning Officers, Librarians and General Education officers. This eclectic membership ensured that the discussion on e-portfolios within the campus was multidisciplinary, i.e. additive in nature, and was not restricted to certain niche areas. Nor was it a discussion that ignored the unique needs of individual academic disciplines and concentrated on creating a one-size-fits-all template, an approach which usually brings with it the danger that “portfolios are done unto students, rather than being done by them” (Ayala 2006: 13). In other words, the CoP answered the question “why e-portfolios?” from a course or discipline perspective rather than from an institutional perspective. It used a more bottom-up approach and contributed to a more democratic model of e-portfolio integration.

Last but not least, the CoP also paid particular attention to the choice of technology when implementing e-portfolios. As with the issue of purpose, the choice of technology and its implementation plays a major role in whether students and teachers accept or reject e-portfolios (Shroff et al. 2011). Here too the CoP took an inductive approach: members were free to choose the technology they would use as a platform for their course-level e-portfolios and to bring their own and their students’ voices back to the discussion table of the CoP.

The following sections re-examine the discussion of the above issues within the CoP and the conclusions reached. They take the form of questions and answers which the CoP considered relevant to student e-portfolios and which could lead up to an e-portfolio initiative at the course and/or programme level at higher education institutions. The chapters in the second section of this volume are not only case studies illustrating the discussion in this chapter but also carriers of students’ and teachers’ voices as reflected in their data.

Five Questions for Effective E-Portfolio Practice

Question 1: Why Use E-Portfolios for Your Course?

Any discussion on e-portfolios with a bottom-up approach needs to start with the question of purpose (Barrett 2007). In one of the very first meetings, members of the CoP were asked what, to their minds, was the primary purpose of introducing e-portfolios to their courses. This is a very different discussion from the one found in the literature on the affordances of e-portfolios in general. A good overview of these is provided by Shroff et al. (2013), and a more comprehensive one by Cambridge (2010), and I will not review them here. Individual authors in this volume have referred to the relevant studies in their own fields, which are more useful to the purposes of this volume. The discussion is different because the practitioners were asked to reflect on whether an e-portfolio as a tool (for assessment, reflection, repository or showcase) fitted the discipline they were representing at all. In a previous exercise, the members had already familiarised themselves with the general affordances associated with e-portfolios and were now ready to adapt that discussion to their own practice. In many ways, members had to start from scratch, as experiences from classical e-portfolio disciplines such as Education or Language could not be put directly to use for disciplines like History or Mathematics. Moreover, they had to consider the value added by the e-portfolio exercise from both their own and their students’ perspectives in order to complete the following task:

  • Please complete the following statements:

  • An e-portfolio would help my students to…

  • An e-portfolio (in my course) would help me to…

Task 1: Identifying Roles for the E-Portfolio

As expected, courses from diverse disciplines also had diverse expectations of what role an e-portfolio would fulfil in that course, taking into account the existing syllabi, outcomes and assessment schemes in place. These roles ranged from showcasing particular skills such as creativity in a foreign language (Chui and Dias in this volume) to scaffolding a major assessment task such as a term paper by collecting and reflecting on artefacts throughout the semester (To and Ladds in this volume). Courses within disciplines such as Physical Education (Cheung et al. in this volume) or Education (Sivan in this volume) looked at e-portfolios as a reflection and showcase tool for out-of-class learning, whereas in a course on Business Communication it was thought best to integrate the e-portfolio into the day-to-day classroom activities and make it into a sharing platform for collaborative learning (Linger in this volume). On a more macro level, General Education (GE) portfolios were thought to be best open-ended and to serve to showcase the GE experience at HKBU (Hodgson in this volume), whereas final year European Studies students were encouraged to develop a portfolio of skills they thought were most suited to the job market that they were about to enter (Cabau in this volume).

Question 2: Where Should You Start?

Once the role of the e-portfolio at the course level has been defined, a good starting point for its implementation is to identify specific outcomes for the final product. Since the CoP was an institutionally funded group with the mandate of identifying the scope of multidisciplinary e-portfolios for the entire institution, it was also essential to find a common denominator for all courses of the university and use this as the starting point. A particularly useful set of criteria was found in the seven Graduate Attributes (GAs) defined by the HKBU.

At first sight, these GAs are little more than an abstract set of core competencies expected of graduating students representing the university in the job market. Nevertheless, core competencies in higher education have been a topic of discussion for quite some time now (Lozano et al. 2012) and are generally read as the antithesis of subject-oriented skill sets; as Gnanam (2000: 148) calls them, they are “subject-neutral” skills. Core competencies are thus by nature transdisciplinary and speak to a much broader target audience than a particular subject. Yet, in keeping with the principles of outcomes-based education, these core competencies, in our case the GAs, are mapped to each individual course taught at the university. This makes the GAs a particularly useful instrument when designing an e-portfolio even at the course level. The CoP sought to capitalise on this, and the members were asked to identify at least two GAs from the above list which had been mapped to their individual courses and which they would like to assess through the e-portfolio they prescribed for their students.

  • Choose a partner from around the room with whom you would like to brainstorm. Try to choose a discipline which is far from your own. The idea is to learn from each other and also to identify common factors of e-portfolios across disciplines.

  • You have a hand-out with the GAs on it. Choose at least two which you think you can use as a starting point for your e-portfolio concept.

  • Ask yourself which course/programme outcome(s) can be mapped to each GA.

  • Explain to your partner(s) why you chose each GA and brainstorm what sort of artefacts/student work you would like to see under this category.

Task 2: Mapping the GA to the Outcomes of the E-Portfolio

In this particular group of CoP members, it was noticed that Knowledge (particularly cultural and general knowledge), Skills (especially information literacy and IT) and Creativity (including critical inquiry) emerged as some of the common GAs which members wanted to see reflected through e-portfolios in their courses, irrespective of discipline. This had to do both with the understanding of e-portfolios as showcases of student work and with the difficulty of assessing attributes such as creativity or information literacy through conventional assessment methods. The point to note here is that these attributes were considered important by practitioners of a diverse set of disciplines and found to be relevant to their disciplines.

Question 3: How Is the E-Portfolio Going to Be Structured?

Once it has been established what role an e-portfolio should play within the course and its assessment design, and what outcomes the e-portfolio should be assessing, the next logical question to discuss is the structure and the look and feel of the e-portfolio. As was evident in the deliberations of the CoP, this broad question can be broken down into three main components: the nature of the artefacts included in the e-portfolio; the number of such artefacts that should be included so that a clear development of the attribute being assessed emerges and the e-portfolio effectively fulfils its role; and, last but not least, how the final product is organised and how it should look. An easy answer to these questions is that it depends on the course and its outcomes as well as on the person teaching that course. This is true on the surface. On the other hand, for practitioners just starting out with the idea of e-portfolios, it is of vital importance that a set of criteria be provided to act as guidelines for developing their own ideas further (see also Pegrum and Oakley in this volume). From the students’ perspective, it is equally vital that they receive a succinct set of directions to be able to collect, select and present the artefacts that make their e-portfolio most effective for its target audience (Ellis in this volume). The answers presented below therefore do not lay claim to being exhaustive or representative; they are the result of the CoP discussions mentioned above and are based on the experiences of 12 different practitioners, the details of which can be found in the chapters of this volume. They contribute to the criterion-based model developed by the CoP and then tested in individual courses.

Question 3a: What Kind of Artefacts Can Be Included?

Broadly, e-portfolios allow for two types of artefacts: text-based artefacts, which could include reflective texts, journals, blogs or research logs, among others; and multimedia artefacts such as videos, collages and vlogs. The assignments could be course-embedded, i.e. set by the instructors as part of their teaching, or they could be specific portfolio assignments.

Systematically, one can map these artefacts to specific outcomes of e-portfolios and classify them accordingly. The following table was the result of such a discussion within the CoP, in which members were asked to add further ideas on what kinds of artefacts could be linked to the outcomes listed on the left.

Outcome | Examples
Critical inquiry (assignment: small-scale research task) | Journal entries, (video) blogs, bibliographies, evidence of critical use of the Internet
Creativity (assignment: solve a problem) | Case studies, assignments, creating an original piece of work such as a literary text or a multimedia artefact
Citizenship (assignment: discipline-oriented community service) | Multimedia and/or reflective essay as evidence of extra-curricular engagement (political/social/creative)
Information literacy | Research log, research assignments, bibliography, use of the Internet
Task 3: Giving Examples of Artefacts for Each Outcome

The above table suggests that assignments set within the course are also legitimate artefacts which can be re-used for the purposes of an e-portfolio. Such assignments can be tagged to particular outcomes and either pointed out to the students or identified by the students themselves as artefacts for their e-portfolios. Over the course of the semester, a repository is gradually built up for a particular outcome, from which the student can select his or her best work. But artefacts can also be selected independently of course assignments, where the e-portfolio and its contents constitute an assignment in themselves. These artefacts may showcase independent and autonomous learning (Chui and Dias in this volume) and even encourage the kind of inquiry-based learning which lies at the heart of many of the core competencies set out by higher education institutions for the twenty-first century.

Question 3b: How Many Artefacts Should Be Included?

This is usually the first question asked by students when an e-portfolio is introduced as an assessment component of a course. The question may reflect not so much a desire to know more about the assignment as a nagging concern about workload. And though it is good practice to prescribe a minimum number of artefacts, one needs to keep feasibility from the student’s perspective constantly in mind. On the other hand, it is not realistic to leave it entirely to the student to decide how many artefacts to include, as a single artefact may not reflect any development of the outcome being assessed over the course of the semester. The CoP experience, as reflected in the case studies included in this volume, points towards a range of three to five artefacts in each category of the e-portfolio, depending on the length and time required to acquire each artefact. Ultimately, it is an individual decision which can be made more democratic by including the students in the decision-making. Asking them to commit to a certain number of artefacts, keeping in mind their individual workloads, fosters a sense of ownership and gives the teacher an insight into what the student has actually accomplished given his or her other semester commitments.

Question 3c: How Should the Artefacts Be Organised?

This question has two answers at opposite ends of the spectrum of designs available for student e-portfolios. One is that the organisation of the portfolio is best left to the owner of the portfolio; the other is that students should be provided with a template in which the categories for organising the artefacts are pre-determined according to the outcomes that the e-portfolio is intended to assess. The second answer has some obvious advantages. For newcomers, whether students or teachers, it is useful to have a structure or scaffolding on which to build up a portfolio. From the teacher’s perspective, it helps to keep the outcomes in mind while designing and later assessing the portfolio. It also enables the teacher to present the outcomes better to the students, who in turn are better able to understand what is expected of the portfolio. For absolute beginners using an e-portfolio for the first time, detailed prompts could also be provided alongside the categories to make clear what exactly is meant by each category and what types of artefacts are expected in it. This kind of scaffolding not only eases the transition into portfolio-based assessment but also serves as a lesson in how e-portfolios can look and be organised, a skill that is then transferable to other contexts where an e-portfolio might be used. As expertise increases and more experience in working with portfolios is gained, such scaffolding can gradually be removed, and users can eventually decide for themselves how they would like to organise their portfolios. At this point they assume full ownership of the portfolio.

Generally, the broader the target audience for a portfolio, the less scaffolding one should use. Whether it concerns the number of artefacts, their nature or the organisation of the end product, less scaffolding means more opportunity for the user to showcase his or her skills and competencies. In the present volume, the portfolios showcasing the GE experience consciously did not prescribe a template but gave examples of similar portfolios, which enabled students to identify the areas they wanted to highlight in their GE portfolios and gave them the space to explore the possibilities (Hodgson in this volume). For assessments linked to specific competencies which are pre-defined at the institutional or course level, pre-structured portfolios enable a more granular insight into student progress and development, e.g. using student-facing learning analytics (Ellis in this volume).

Question 4: How Should You Assess E-Portfolios?

Assessment of e-portfolios has been discussed at length in the literature (e.g., Bhattacharya and Hartnett 2007; Barrett 2007; Lorenzo and Ittelson 2005a). From this discussion certain propositions emerge which one must keep in mind when taking up by far the most challenging part of implementing e-portfolios. Barrett (2007: 442) proposes that when assessing e-portfolios one must differentiate between assessment for and assessment of learning. The latter is high-stakes, institutionally prescribed summative assessment, while the former is meant to improve learning and is essentially formative (Barrett 2007: 444). This narrative of e-portfolio assessment being either summative or formative has become more or less established, leading to the dichotomy between developmental or learning portfolios (Barrett 2007) on the one hand and showcase or assessment portfolios (Lorenzo and Ittelson 2005a) on the other.

On the surface, most of the e-portfolios discussed in this volume belong to the latter category of showcase or assessment portfolios, as they are prescribed by the institution (even if only at the course level), are part of the assessment scheme of the particular course and therefore have to be awarded a grade at the end of the semester. However, it might be wrong to call them positivist as opposed to constructivist (Paulson and Paulson 1994: 8) in a stricter sense, as the process of selecting, organising and presenting the artefacts can still involve a constructivist approach in which meaning (of the external GAs) is constructed and students are free to choose or create artefacts that they deem most suited to the GA being assessed in that course. Barrett suggested in 2007 that “in order to approach a balanced solution we must envisage a system that makes it easy for students to maintain their own digital archive of work […]. Students can then draw from the same collection of evidence as they respond to and create showcase portfolios” (2007: 440). This vision is already a reality in 2016. The implementation of e-portfolios for pre-service teachers in the Graduate School of Education at the University of Western Australia, which prescribes both a developmental and a showcase e-portfolio, is a shining example (Oakley et al. 2013). E-portfolio management systems such as Mahara and MyPortfolio, the two main platforms used by the CoP, enable users to maintain a repository of artefacts which can be drawn upon to create showcase or assessment e-portfolios as the need arises. When these systems are used in conjunction with institutional Learning Management Systems (LMS), they can also automatically import online assignments into the user’s e-portfolio: Mahara can be plugged into the Moodle LMS, and MyPortfolio is built into the Blackboard LMS.

Assessment portfolios are best assessed using a specially constructed rubric that is fit for purpose (Bhattacharya and Hartnett 2007). The rubric enables the teacher to assess the portfolio using criteria formulated to describe the skills or outcomes which the e-portfolio is supposed to assess. Sharing the rubric with the students gives them additional orientation and explains to them what a particular skill or outcome means. In a more democratic process, which would strengthen the formative component, one can discuss the skill descriptors of the rubric with the students. The CoP opted to develop a rubric for the core competencies that its members had identified as being relevant to e-portfolios in almost all disciplines. The result was a generic, transdisciplinary rubric arising from a multidisciplinary effort to implement e-portfolios in individual courses (see Appendix B at the end of this volume). The assessment competencies were identified as Presentation, Reflection, Information Literacy and Critical Thinking. The idea was that teachers would already have the descriptors for the core competencies ready when they embarked upon the e-portfolio experiment and would add to the rubric their own discipline’s competencies which the e-portfolio should showcase. They could also remove any of the four core competencies considered irrelevant.

Question 5: What Electronic Platform Should You Use?

The instinctive web 2.0 answer to this question is “the platform that is easiest to use”. Though simplistic, this is not an answer one should dismiss in favour of more sophisticated ones. Using the Technology Acceptance Model (Davis 1989), Shroff et al. have shown empirically that “when students perceive the e-portfolio system as one that is easy to use and nearly free of mental effort, they may have a favourable attitude towards the usefulness of the system” (2011: 610). This is also an important insight which the CoP arrived at after testing four platforms commonly used for e-portfolios. More importantly, ease of use is a criterion which is relevant to both teachers and students and is almost always the first criterion in terms of buy-in and usage for both parties. This is because, when it comes to using web 2.0 applications, it is easy to fall into the trap of the digital natives versus digital immigrants divide, which automatically puts teachers on the defensive and assumes magical digital powers in students, even though empirical evidence does not support the existence of such a divide. Teachers therefore frequently put in hours of work trying to master the digital platform, often forgetting that students might have to do the same but might not share the same level of motivation, especially if the purpose is not yet clear enough.

The CoP tested four different platforms against four criteria, namely (i) ease of use, (ii) compatibility with the institutional LMS, (iii) fitness for purpose (including aesthetics) and (iv) ownership (can the user take the portfolio with him/her?). The first criterion, ease of use, has been discussed above. Compatibility with the institutional LMS is important for the simple reason that if students and teachers are already logged on to the same LMS for their teaching and learning purposes, it might be easier for them to use a built-in e-portfolio system that links to that LMS. Apart from the obvious advantage that no separate log-ins are required, built-in e-portfolio systems also enable students and teachers to seamlessly use their electronically submitted assignments as artefacts for the e-portfolio. Further, as the LMS is locked down within the university community, it offers new users an important safeguard against copyright infringement issues, since the teacher can intervene if such infringements are suspected before the e-portfolio is shared outside the course or university domain. Very often an LMS-based e-portfolio is the only option which the institution offers, considering the costs and logistics involved in hosting and maintaining an entirely separate platform exclusively for e-portfolios, especially at the piloting stage, as in the case of the HKBU. But such a portfolio platform might not be fit for purpose, as it might offer very few tools for organisation, presentation or sharing. It might also not be aesthetically pleasing, offering only a limited number of themes, templates and customisation possibilities. Last but not least, it might not enable peer sharing or interaction. On the other hand, a simple standard template might be advantageous at the start, as it is easy to use and enables both students and teachers to concentrate on the content rather than on the appearance. E-portfolios plugged into the institutional LMS often have an issue with ownership. If an e-portfolio is to be an instrument of lifelong learning or a showcase for future employers, the students must have complete ownership. However, many institutions do not allow students to take their e-portfolios with them. While some allow a certain grace period using an archival system, others might block access completely upon graduation.

It is therefore important to keep in mind how an e-portfolio initiative could be sustained beyond the university experience, and the choice of platform plays a central role in this. The following table gives an overview of the four platforms tested by the CoP, mapped against the four criteria mentioned above. It is interesting to note that the commercial websites, such as Google Sites and Weebly, offer the most flexibility and features with regard to the CoP criteria. A major weakness, however, is that commercial platforms, especially Weebly and Wix, which are essentially website builders, do not provide much scope for peer commentary or group sharing in their basic features. Also, not being locked down within a university domain, they expose their owners to the dangers of the open web, such as copyright or liability issues, and to the attendant risk of lawsuits (Table 1.1).

Table 1.1 Overview of e-portfolio platforms

Conclusion

Despite the amount of literature and the good practice examples now available on e-portfolios for student learning, embarking on the e-portfolio experiment can often be a daunting task for individual course leaders or teachers. This is partly because of the very high-level outcomes often associated with e-portfolios, such as lifelong learning or reflective practice. In addition, e-portfolios have traditionally been a humanities domain, thought to be particularly useful for writing courses and education programmes, or for documenting internship experiences. But as the demand for outcomes-based education and evidence-based assessment grows, especially in the Asia-Pacific region, it is essential to explore the affordances of e-portfolios further and to make them more accessible to a wider community of teachers, practitioners and students. The community of practice at the HKBU set out to do exactly this, with a multidisciplinary approach as opposed to a transdisciplinary one, so as not to gloss over the details of the e-portfolio implementation process but rather to concentrate on them, in order to reach a maximum number of engaged practitioners.

The CoP experience helped us to break down the implementation process of e-portfolios into its most essential components, which have been discussed in this article. It also gave us an insight into students’ perceptions of, and teachers’ problems with, e-portfolios, and brought to light some essential factors which contribute to a low buy-in rate. Like all new teaching and learning initiatives, initiating and sustaining an e-portfolio approach takes up enormous amounts of time from both the teachers’ and the students’ points of view. Course-embedded e-portfolios therefore cannot be extra-curricular activities but must replace existing (and maybe not so effective) assessment methods such as final exams or term papers, or be used to ease the load of such assignments by getting the students to work ahead and so avoid end-of-semester stress. Buy-in can also be ensured by making the e-portfolios legitimate showcases of student learning and by giving the students a say in assessing them, both through peer or self-assessment and through an opportunity to negotiate the items of the assessment rubric. A mutually negotiated rubric can serve the dual purpose of giving the users more ownership as well as more orientation. An e-portfolio initiative also cannot be static like most other assessment methods; it must evolve with the needs of the students and the institution. Collecting feedback from students is therefore essential to keep an eye on whether the e-portfolio implementation is indeed working for them. Last but not least, since e-portfolios are not a purely summative form of assessment but work better when used as a formative tool, it is essential to mentor and scaffold the initial e-portfolios created by students. Regular sharing and submissions throughout the semester go a long way towards revealing what direction the e-portfolios are taking before it is too late.

We sincerely hope that the case studies included in this volume will help to allay some initial reservations about e-portfolio practice and encourage those already thinking about it to take the all-important first step.