1 Introduction

Massive Open Online Courses (MOOCs) represent a new Technology Enhanced Learning (TEL) model that has succeeded in incorporating video-based lectures and new forms of assessment in courses offered on the Web to a huge number of participants around the globe, without any entry requirements or tuition fees, regardless of their location, age, income, ideology, and educational background [43]. Different types of MOOCs have been introduced in the MOOC literature. Daniel [8] classified MOOCs into connectivist MOOCs (cMOOCs) and extension MOOCs (xMOOCs). The vision behind cMOOCs is based on the theory of connectivism, which fosters connections, collaboration, and knowledge sharing among course participants. The second type, xMOOCs, follows behaviorist and cognitivist theories with some social constructivist aspects. xMOOC platforms were developed by various elite universities and are usually distributed through third-party providers such as Coursera, edX, and Udacity.

Much has been written on MOOCs about their design, effectiveness, case studies, and their ability to provide opportunities for exploring new pedagogical strategies and business models. In fact, most existing MOOCs are especially interesting as a source of high-quality content, including video lectures, testing, forms of discussion, and other aspects of knowledge sharing. Despite their popularity and large-scale participation, a variety of concerns and criticisms about the use of MOOCs have been raised. Yousef et al. [43], in their comprehensive analysis of the MOOC literature, reported that the major limitation of MOOCs is the lack of human interaction (i.e. face-to-face communication). Furthermore, one important obstacle that prevents MOOCs from reaching their full potential is that they are rooted in behavioral learning theories. In other words, MOOCs so far still follow a centralized learning model based on traditional teacher-centered education that controls the MOOC and its activities. Efforts towards student-centered MOOCs, based on connectivist and constructivist principles that emphasize the role of collaborative and social learning, are the exception rather than the rule [43]. Other researchers point out further concerns about the limitations of MOOCs. These concerns include pedagogical problems in providing participants with timely, accurate, and meaningful feedback on their assignment tasks [16, 21]; a lack of interactivity between learners and the video content [15]; and high drop-out rates, on average 95 %, of course participants [6, 8]. A plausible reason for the latter problem might be the complexity and diversity of the participants. This diversity is not only related to cultural and demographic attributes, but also encompasses individual motives and perspectives when enrolling in MOOCs [41].

In order to address these limitations, a new design paradigm called blended MOOCs (bMOOCs) has emerged and is increasingly discussed in the MOOC community. Blended learning has been widely defined as a combination of face-to-face and online learning activities. As an instance of blended learning, bMOOCs aim at bringing in-class (i.e. face-to-face) interactions and online learning components together in a blended environment, while taking into account the important openness factor in MOOCs [2, 12, 26]. The bMOOC model has the potential to bring human interaction into the MOOC environment, foster student-centered learning, support the interactive design of video lectures, provide effective assessment and feedback, and accommodate the diverse perspectives of MOOC participants.

However, evaluating participants at large scale in MOOCs is obviously a big challenge [37]. Current MOOCs use traditional assessment methods, including e-tests, quizzes, multiple-choice and short answer questions [10, 18]. These methods are limited in effectively evaluating learners in an open and distributed environment. Moreover, while these methods are relatively easy to apply in science courses, they are difficult to apply in humanities courses, mainly due to the nature of these courses, which build on the creativity and imagination of the learners [31]. This provides strong grounds for alternative assessment methods that give MOOC participants effective and constructive feedback on their open-ended exercises or essays.

Most assessment methods that provide this kind of feedback usually involve teaching staff correcting and grading the assignments. In MOOC scenarios, this requires substantial resources in terms of time, money, and manpower. To alleviate this problem, we argue that the most suitable way forward is to look for assessment methods that employ the wisdom of the crowd. Such assessment methods include portfolios, wrappers, self-assessment, group feedback, and peer assessment [5, 9].

A learner’s portfolio is an approach to authentic assessment that potentially enables large classes to reflect on their work [23]; assessment wrappers use a set of reflective questions to engage participants in self-assessment and self-directed learning [38]; self-assessment can be used to prompt learners’ reflection on their own learning outcomes; and peer assessment refers to crowdsourcing grading activities, where learners take responsibility for rating, evaluating, and providing feedback on each other’s work [34].

We considered these different crowdsourcing assessment activities and concluded that the most suitable assessment method in our scenario is to involve the learners themselves under supervision and guidance from the teachers. We think that peer assessment activities that involve the learners themselves in the assessment process can play a crucial role in supporting an effective MOOC experience. So far, little research has been carried out to investigate the effectiveness of using peer assessment in a bMOOC context [5, 33]. In an attempt to handle this assessment issue, this paper presents in detail a study conducted to investigate the effectiveness of using peer assessment on learners’ performance and satisfaction in the bMOOC environment L2P-bMOOC.

2 L2P-BMOOC: First Design

Current MOOCs suffer from several critical limitations, among which are the focus on the traditional teacher-centered model, the lack of human interaction, and the lack of interaction between learners and the video content [15, 19, 44].

L2P-bMOOC is a bMOOC platform built on top of the L2P learning management system of RWTH Aachen University, Germany. It was designed and implemented to address these limitations. L2P-bMOOC supports learner-centered bMOOCs by providing an environment where learners can take an active role in the management of their learning activities, thus harnessing the potential of bMOOCs to support self-organized learning [4]. L2P-bMOOC fosters human interaction through face-to-face communication and scaffolding, driven by a blended learning approach. The platform includes a video annotation tool that enables learners’ collaboration and interaction around a video lecture, in order to engage the learners and increase interaction between them and the video content. This means that L2P-bMOOC embodies an evolved concept that moves away from traditional MOOC environments, where learners are limited to viewing video content, towards a collaborative and dynamic one. Learners are encouraged to organize their learning, collaborate with each other, and create and share their knowledge with others.

In L2P-bMOOC, video lectures are structured and collaboratively annotated in a mind-map representation. Figure 1 shows the workspace of L2P-bMOOC, which consists of a course selection section, an unbounded canvas representing the video map structure of the lecture, and a sidebar for adding new video nodes and editing video properties. Possible actions on a video node include video annotations, video clipping, social bookmarking (i.e. attaching external web feeds), and collaborative discussion threads [40].

Fig. 1. L2P-bMOOC workspace.

The annotation section of video nodes is displayed in a separate layer above the main page and can be opened by clicking the “Annotation icon @” attached to map nodes. It consists of three main blocks: an interactive timeline, the list of existing annotations, and a creation form for new annotations (see Fig. 2). The interactive timeline visualizing all annotations is located right under the video and is synchronized with the complete list of annotations. By selecting timeline items, users can watch the video directly from the part to which the annotation points. The timeline range corresponds to the video duration and can be freely moved and zoomed into. Timeline items also include small icons that distinguish three annotation types: Suggestion, Question, and Marked Important.

Fig. 2. Video annotation panel in L2P-bMOOC.
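To make the structure of the video map and its annotations more concrete, the following sketch models them in simplified form; all class and field names are illustrative assumptions rather than the actual L2P-bMOOC data model.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class AnnotationType(Enum):
    SUGGESTION = "Suggestion"
    QUESTION = "Question"
    MARKED_IMPORTANT = "Marked Important"


@dataclass
class Annotation:
    author: str
    text: str
    video_time_sec: float                      # position on the interactive timeline
    type: AnnotationType = AnnotationType.SUGGESTION


@dataclass
class VideoNode:
    title: str
    video_url: str
    clip_start_sec: Optional[float] = None     # video clipping
    clip_end_sec: Optional[float] = None
    bookmarks: List[str] = field(default_factory=list)         # external web feeds
    annotations: List[Annotation] = field(default_factory=list)
    children: List["VideoNode"] = field(default_factory=list)  # mind-map structure

    def annotations_sorted(self) -> List[Annotation]:
        """Annotations ordered by video time, as shown on the timeline."""
        return sorted(self.annotations, key=lambda a: a.video_time_sec)
```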

The pilot test for this platform was the bMOOC “Teaching Methodologies”, delivered by Fayoum University, Egypt in cooperation with RWTH Aachen University. It started in March 2014 and ran for eight weeks. The course was offered both formally to students from Fayoum University and informally with open enrollment to anybody interested in teaching and learning methodologies. At the end of the course, there were 128 active participants, 93 of whom were formal participants who took the course to earn credits from Fayoum University. These participants were required to complete the course and obtain a positive grading of their assignments. The rest were informal participants undertaking the learning activities at their own pace without receiving any credits. The teaching staff provided six video lectures, and the course participants added 27 related videos. The course was taught in English, and the participants were encouraged to self-organize their learning environments, present their own ideas, collaboratively create video maps of the lectures, and share their newly-acquired knowledge through social bookmarking, annotations, forums, and discussion threads [42].

To evaluate whether the platform supports and achieves the goals of “network learning” and “self-organized learning”, we designed a qualitative study based on a questionnaire. This questionnaire used a 5-point Likert scale ranging from (1) strongly disagree to (5) strongly agree. We derived the results and reported conclusions based on the 50 participants who completed and submitted the questionnaire by the end of the survey period. The results obtained from this preliminary analysis are summarized in the following points:

The collaboration and communication tools (i.e. group workspaces, discussion forums, live chat, social bookmarking, and collaborative annotations) allowed the course participants to discuss, share, exchange, and collaborate on knowledge construction, as well as receive feedback and support from peers.

The results further show that the majority agreed that L2P-bMOOC allowed them to be self-organized in their learning process. In particular, the participants reported that it helped them to learn independently from teachers and encouraged them to work at their own pace to achieve their learning goals.

The study, however, identified two problems concerning assessment and feedback. First, the participants had some difficulties in tracking and monitoring their learning activities and those of their peers. The second issue that was pointed out was the limited ability to evaluate and give effective feedback on their open-ended exercises [42].

A possible solution for the first problem was the introduction of learning analytics features. These features can improve the participants’ learning experience by, for example, monitoring their progress and supporting (self-)reflection on their learning activities. To alleviate the second problem, we opted for peer assessment. As motivated in the previous section, one possible scenario for peer assessment is the evaluation of assignments that cannot be corrected automatically, such as open-ended exercises and essays.

In August 2014, we conducted a second case study to evaluate the usability and effectiveness of the learning analytics module. The focus of this study was to examine to what extent this module supported personalization, awareness, self-reflection, monitoring, and recommendation in bMOOCs [39]. What still remains unclear is how to leverage peer assessment in bMOOCs. The paper at hand investigates the application of peer assessment in bMOOCs and aims to address the following research questions:

  • What is the learners’ perception of satisfaction with the usability of the peer assessment module in L2P-bMOOC?

  • Does the peer assessment module improve learning outcomes?

  • Does the peer assessment module provide reliable and valid feedback for participants?

  • Which peer assessment model fits best in a bMOOC context?

  • What are the future research opportunities in the area of peer assessment that should be considered in the development of bMOOC environments?

3 Peer Assessment in MOOCs

Assessment and feedback are essential parts of the learning process in MOOCs. Collecting valid and reliable data to grade learners’ assignments, identifying learning difficulties and taking action accordingly, and using these results are just some of the measures to improve the academic experience [20]. Many MOOCs use automated assessments (e.g. multiple-choice questions, quizzes), which strongly focus on the cognitive aspects of learning. The key challenge of automated grading in MOOCs is its inability to capture the semantic meaning of learners’ answers, in particular for open-ended questions [20].

Peer assessment, on the other hand, is a promising alternative evaluation strategy in MOOCs and a critical method for scaling the grading of open-ended assignments [28]. Peer assessment represents a shift from a teacher-directed perspective to one where learners are actively involved in the assessment loop [27]. This method of assessment is suitable for activities, such as exercises, assignments, or exams, that do not have clearly right or wrong answers in the humanities, social sciences, and business studies [27]. Several studies have investigated the impact of using peer assessment in traditional classroom instruction and acknowledged a number of distinct advantages. These include: increased learner responsibility and autonomy, new learning opportunities for both sides (i.e. givers and receivers of work reviews), an enhanced collaborative learning experience, and a drive towards a deeper understanding of the learning content [34, 35].

Unfortunately, so far there has been little discussion about using peer assessment in MOOCs on the humanities, social sciences, or business studies. In the next section, we discuss how MOOC providers are using peer assessment in their courses.

3.1 Coursera

Coursera has integrated a peer assessment system into its learning platform, in which learners evaluate and provide feedback on at least 3 to 4 assignments. Coursera provides learners with an optional evaluation matrix to improve peer assessment results. In addition, learners have the opportunity to evaluate themselves [21, 28]. The peer assessment system in Coursera involves three main phases: (1) submission, (2) evaluation, and (3) publishing results, as shown in Fig. 3 [7]. Until recently, there has been no reliable evidence on how peer assessment affects the learning experience in Coursera.

Fig. 3. Peer assessment in Coursera [7].

In several MOOCs offered by Pennsylvania State University and hosted online by Coursera, learners reported that they mistrusted the peer assessment results. Moreover, they outlined some of the issues of peer assessment, such as the lack of peer feedback, accuracy, and credibility [33].

3.2 edX

Peer assessment in edX exists in a very similar fashion to Coursera. In the case of edX peer assessments, learners are required to review a few sample assignments that have already been graded by the professor before evaluating their peers. After learners have proven that they can assign grades similar to those given by the professor, they are permitted to evaluate each other’s work and provide feedback using the same rubric [11] (Fig. 4).

Fig. 4. Peer assessment rubrics in edX [11].

3.3 Peer Assessment Issues in MOOCs

Peer assessment is a valuable evaluation method that helps learners receive deeper feedback on their assignments, but it is not always as effective as expected in MOOC scenarios [33]. Jordan [17] shows that MOOCs that use peer assessment tend to have lower course completion rates than those that use automated assessment. In general, there are several possible factors that can explain the lack of effectiveness of peer assessment in MOOCs:

  • The issue of scale [33].

  • The diversity of reviewers’ backgrounds and prior experience [41].

  • The lack of accuracy and credibility of peer feedback [33].

  • The lack of transparency of the review process.

  • MOOC participants do not trust the validity and reliability of peer assessment results due to the absence of a clear evaluation authority (e.g. teacher).

  • The low perceived expertise; i.e. peer feedback is not always as effective as teacher feedback [22].

  • Peer assessment in MOOCs employs fixed grading rubrics; obviously, different exercise types require different assessment rubrics [30].

4 Peer Assessment in L2P-BMOOC

In this paper, we focus on the application of peer assessment from a learner perspective to support self-organized and network learning in bMOOCs through peer assessment rubrics. In the following sections, we discuss the design, implementation, and evaluation of the new peer assessment module in L2P-bMOOC.

4.1 Requirements

In order to enhance L2P-bMOOC with a peer assessment module, we collected a set of requirements from the recent peer assessment and MOOC literature [14, 33, 43]. We then designed a survey to collect feedback from different MOOC stakeholders concerning the importance of the collected requirements. The survey respondents fell into two groups, professors and learners, as follows:

  • Professors: 98 professors who had taught a MOOC completed the survey; 41 % were from Europe, 42 % from the US, and 17 % from Asia.

  • Learners: 107 learners participated in the survey. A slight majority of these learners were male (56 %). The learners’ ages ranged from 18 to 40+, with almost 65 % between the ages of 18 and 39. 12 % had a high school or other educational level, 36 % were studying for a Bachelor’s degree, 40 % for a Master’s, and 12 % for a PhD. All of them had taken one or more online courses, and 92 % had participated in MOOCs. These learners came from 41 different countries and cultural backgrounds in Europe, the US, Australia, Asia, and Africa. A summary of the survey analysis results is presented in Table 1.

    Table 1. L2P-bMOOC peer assessment requirements (N = 205).

The mean agreement with the peer assessment requirements is quite high, above 4 for all items. In particular, indicators 3 and 5 call for specific, albeit flexible, guidelines and rubrics. This is important to avoid grading without reading the work, or not following a clear grading scheme, both of which negatively impact the quality of the given feedback [44].

Based on the peer assessment literature review and the survey results, we derived a set of requirements to support peer assessment in L2P-bMOOC, as summarized below:

  • User Interface: The interface should be simple, understandable, and easy to use while requiring minimal user input. The interface design of the module should take usability principles into account, and go through a participatory design process [24].

  • Rubrics: Provide learners with flexible task-specific rubrics that include descriptions of each assessment item to achieve fair and consistent feedback for all course participants.

  • Management: Peer assessment should be easy to manage. The module ought to be integrated into the platform with features for activation and deactivation.

  • Scalability: The fundamental difference between MOOCs and traditional classrooms is the scale of learners. Consequently, scalability should be considered in the implementation of the peer assessment module in L2P-bMOOC.

  • Collaborative Review: Provide mechanisms for a collaborative review process which involves the input of more than one individual participant.

  • Double Blind Process: The peer assessment module should support a double blind review process: neither the assignment authors know the reviewers’ identities, nor do the reviewers know the authors’ identities.

  • Deadlines: The peer assessment module should provide two deadlines for each task: one for learners to submit their work and one for the peer grading phase.

5 Implementation

The peer assessment module in L2P-bMOOC consists of six components, as shown in Fig. 5. These components are organized around the following needs:

  • Teachers need methods to define assignment tasks and manage the review process.

  • Learners need methods to see assignment tasks and submit solutions, as well as to provide and receive peer reviews.

Fig. 5. Peer assessment workflow.

From a technical perspective, we used Microsoft SharePoint 2013 as the underlying platform. SharePoint offers a solid base for MOOC development, while providing a wide range of other advantages, including scalability, security, customization, and collaboration. The internal list structure of SharePoint makes it easy to implement fine-grained rights on individual list items, which allows for easier rights management in the L2P-bMOOC peer assessment module. Basically, it is easy to configure who can see what at a given point in time.
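Conceptually, these per-item rights boil down to a phase-dependent visibility rule. The following minimal sketch illustrates this logic in plain Python rather than SharePoint code; the types, fields, and rule itself are hypothetical simplifications.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Set


@dataclass
class User:
    name: str
    is_teacher: bool = False


@dataclass
class Solution:
    authors: Set[str]
    reviewers: Set[str] = field(default_factory=set)


def can_view_solution(user: User, solution: Solution,
                      review_start: datetime, now: datetime) -> bool:
    """Conceptual rule for who may see a submitted solution at a given time."""
    if user.is_teacher or user.name in solution.authors:
        return True
    # Assigned reviewers only gain access once the review phase has started.
    return user.name in solution.reviewers and now >= review_start
```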

5.1 Teacher Perspective

The peer assessment module in L2P-bMOOC provides a centralized place of actions (a navigation ribbon) that helps teachers define, manage, and navigate the assignment tasks, as shown in Fig. 6.

Fig. 6. Teacher navigation ribbon.

The ribbon actions provide a complete set of tools to define peer assessment tasks, manage task-specific rubrics, assign reviewers, give final grades, and publish the results.

5.1.1 Task Definition with Rubrics

The task definition begins with defining some basic attributes of the assignment. These attributes include the name and description, the deadlines, and the associated materials and resources. Additionally, a number of specific settings related to the peer assessment itself need to be configured. These settings concern the start and end of the review phase, the impact of the review on the final grade, and the task-specific rubrics (see Fig. 7).

Fig. 7. Task definition with rubrics.

There are well-documented results showing that the effectiveness of peer assessment can be enhanced by asking direct questions for the peer to answer in order to assess the quality of the author’s work [13]. This way, the reviewer can easily reflect on the quality of the work in a goal-oriented manner. Hence, we implemented a rubric system that allows tutors to define specific questions related to each task, and also to reuse pre-defined rubrics. The process for defining rubrics is included in the task definition itself. A typical rubric has two attributes: a name and the actual rubric question. Further, it contains descriptions that define the learning outcome and performance levels, providing enough information to guide learners in doing the peer assessment review. Teachers can select multiple rubrics to associate with an assignment definition, as shown in Fig. 8.

Fig. 8. Managing rubrics.
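A simplified sketch of a task definition with its rubrics and peer assessment settings is given below; the class names, field names, and default values are illustrative assumptions, not the actual platform schema.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional


@dataclass
class Rubric:
    name: str
    question: str
    # Descriptions of the expected learning outcome and performance levels
    # guide the reviewer (field names are illustrative).
    outcome_description: str = ""
    performance_levels: List[str] = field(default_factory=list)


@dataclass
class AssignmentTask:
    name: str
    description: str
    materials: List[str] = field(default_factory=list)   # attached resources
    submission_deadline: Optional[datetime] = None
    review_start: Optional[datetime] = None               # start of the peer review phase
    review_deadline: Optional[datetime] = None
    review_weight: float = 0.3                             # share of peer reviews in the final grade
    rubrics: List[Rubric] = field(default_factory=list)    # reusable, task-specific questions


# Illustrative task definition with one rubric
task = AssignmentTask(
    name="Team project report",
    description="Analyse one issue of the age and its impact on education.",
    submission_deadline=datetime(2014, 11, 10),
    review_start=datetime(2014, 11, 11),
    review_deadline=datetime(2014, 11, 18),
    rubrics=[Rubric(name="Argumentation",
                    question="How well does the report support its claims?")],
)
```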

Once the assignment task has been defined, an automated workflow takes care of publishing the assignment at the specified time along with submission deadline. Meanwhile, another workflow takes care of the review submission after the review start date.

5.1.2 Assigning Reviewers

Course teachers can use this feature to assign solutions submitted by learners to other learners for review. Teachers simply select a learner from a list and assign any solution to him or her for review, as shown in Fig. 9; a future upgrade of the system could automate this distribution process. There are mechanisms to reverse the process if there is a problem or a mistake. After this, the assigned reviews become visible to the learners according to the specified dates, and if any review assignment is made after the review start date, it is shown to the learners directly.

Fig. 9. Assigning reviewers.
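The sketch below illustrates one way such an automated distribution could work, assigning each solution a fixed number of reviewers who are not among its authors while roughly balancing the review load. It is a hypothetical design sketch, not the current manual assignment feature.

```python
import random
from typing import Dict, List


def distribute_reviews(solutions: Dict[str, List[str]],
                       reviews_per_solution: int = 2,
                       seed: int = 42) -> Dict[str, List[str]]:
    """Assign reviewers to solutions, excluding a solution's own authors.

    `solutions` maps a solution id to the list of its authors; the result maps
    each solution id to the learners chosen to review it.
    """
    rng = random.Random(seed)
    learners = sorted({a for authors in solutions.values() for a in authors})
    load = {learner: 0 for learner in learners}    # reviews assigned per learner
    assignments: Dict[str, List[str]] = {}
    for sid, authors in solutions.items():
        eligible = [l for l in learners if l not in authors]
        rng.shuffle(eligible)                      # break ties randomly
        eligible.sort(key=lambda l: load[l])       # prefer less-loaded reviewers
        chosen = eligible[:reviews_per_solution]
        for learner in chosen:
            load[learner] += 1
        assignments[sid] = chosen
    return assignments


# Example: three team solutions, two reviewers each
teams = {"s1": ["alice", "bob"], "s2": ["carol"], "s3": ["dave", "erin"]}
print(distribute_reviews(teams))
```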

5.1.3 Teacher Grading

Teachers have the option to grade the submitted solutions, but this is not mandatory. Alternatively, they can simply assign a grade to learners while taking the peer reviews into account, as shown in Fig. 10.

Fig. 10. Teacher grading.
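Since the task definition specifies the impact of the review on the final grade (Sect. 5.1.1), the teacher grade and the peer grades have to be combined in some way. The following sketch assumes a simple weighted average; the actual weighting used by the platform may differ.

```python
from statistics import mean
from typing import Sequence


def final_grade(teacher_grade: float,
                peer_grades: Sequence[float],
                review_weight: float = 0.3) -> float:
    """Weighted combination of the teacher grade and the mean peer grade.

    `review_weight` is the configurable share of the peer reviews in the
    final grade; the simple weighted average below is for illustration only.
    """
    if not peer_grades:
        return teacher_grade
    return (1 - review_weight) * teacher_grade + review_weight * mean(peer_grades)


# e.g. teacher grade 85, peer grades 70 and 90, peer weight 30 % -> 83.5
print(final_grade(85, [70, 90], review_weight=0.3))
```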

5.1.4 Publishing Grades

After grading all the solutions, teachers can publish the results to the learners at once using an action from the ribbon. As a result, the learners are able to see the correction from the teachers as well as the reviews submitted by their peers.

5.2 Learner Perspective

The navigation ribbon encompasses actions to help learners to submit solutions and perform the peer assessment task.

5.2.1 Submitting Solutions

Once the assignment has been published, the learners can see the details of the assignment and work on their solutions until the proposed deadline. Learners can add a solution by adding a description and uploading their documents and resources relevant to the solution. Learners can work individually, or in groups, depending on the assignment’s requirements (see Fig. 11).

Fig. 11. Submitting solutions.

5.2.2 Peer Assessment (Review)

There are a number of peer assessment methodologies dealing with the anonymity of author and reviewer, e.g. single blind review (the reviewer is anonymous, the author is known), double blind review (both reviewer and author are anonymous), and open review (no anonymity). For this implementation we decided to use double blind review, as it reduces the chances of biased marking [32].

Once the peer review phase starts, the learners can see a list of reviews assigned to them by the teachers. The interface for adding a review can be seen in Fig. 12. It contains two sections: the submitted solution at the top and the review section with rubrics at the bottom. The reviewers can see the documents and resources attached to the solution and any comments given by the authors. In the review section, they can add their comments against the rubric questions, upload files, and assign a grade.

Fig. 12. Peer assessment (review) interface.

6 Case Study

In October 2014, we conducted a third case study to investigate the usability and effectiveness of the peer assessment module. We used the enhanced edition of L2P-bMOOC to offer a bMOOC on “Education and the Issues of the Age” at Fayoum University, Egypt in cooperation with RWTH Aachen University. The course was offered both formally to students from Fayoum University and informally with open enrollment to anyone interested in teaching and education issues. The teaching staff was composed of one professor and one assistant researcher from Fayoum University as well as one assistant researcher from RWTH Aachen University. A total of 133 participants completed this course, 92 of whom were formal participants who took the course to earn credits from Fayoum University. These participants were required to complete the course and obtain a positive grading of their assignments. The rest were informal participants who did not attend the face-to-face sessions; they undertook the learning activities at their own pace without receiving any academic credits. The teaching staff provided nine short video lectures and the course participants added another 25 related videos. Participants in the course were encouraged to use video maps to organize their lectures, and to collaboratively create and share knowledge through annotations, comments, discussion threads, and bookmarks. Participants used the peer assessment module for the submission of a team project report. After the submission, every team reviewed another team’s work and provided feedback based on the rubric questions provided by the teaching staff. These reviews were then taken into consideration by the teaching staff while compiling their own feedback on the team projects. Once the teacher reviews were completed, the final corrections were made public to the students, who could see both reviews of their own project, namely the review from their peers and the review from the teacher.

7 Evaluation

In this case study, we conducted a thorough evaluation of the peer assessment module in L2P-bMOOC in order to answer the main research questions of this work. The aim was to evaluate the usability and effectiveness of the module, including its impact on learning outcomes and the quality of feedback. We also sought to investigate which peer assessment model fits best in a bMOOC context. We employed an evaluation approach based on the ISONORM 9241/110-S questionnaire for general usability as well as a custom questionnaire to measure the effectiveness of peer assessment in L2P-bMOOC.

7.1 Usability Evaluation

The purpose of the usability evaluation is to measure learners’ satisfaction with the peer assessment module and to identify issues for improvement. The ISONORM 9241/110-S questionnaire was designed based upon the International Standard ISO 9241, Part 110 [29]. We used this questionnaire as a general usability evaluation of the peer assessment module. It consists of 21 questions classified into seven main categories. Participants were asked to respond to each question on a scale from (7), a positive statement, to (1), its mirroring negative counterpart.

The questionnaire comes with an evaluation framework that aggregates several aspects of usability into a single score between 21 and 147. A total of 57 out of 133 participants completed the questionnaire. The evaluators exhibited a diversity in age, ranging from 18 to 40+ years, with almost 65 % between the ages of 18 and 24. Around 70 % of the evaluators were Bachelor’s students, 17 % were Master’s students, and the remaining 12 % were pursuing a PhD. All of them had taken one or more online courses. The results obtained from the ISONORM 9241/110-S usability evaluation are summarized in Table 2.

Table 2. ISONORM 9241/110-S evaluation matrix (N = 57).

The overall score was 99.1, which translates to “Everything is all right! Currently there is no reason to make changes to the software in regards of usability” [29]. This result reflects a high level of user satisfaction with the usability of the peer assessment module in L2P-bMOOC.
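The range of 21 to 147 suggests that the single score is simply the sum of the 21 item ratings on the 1–7 scale. Under that assumption, the sketch below shows how a per-respondent score and per-category means (over the seven ISO 9241-110 dialogue principles) could be computed; the grouping of three consecutive items per category is our illustrative assumption, not the official scoring instructions.

```python
from statistics import mean
from typing import Dict, List

# The seven dialogue principles of ISO 9241-110 (three items each, 1-7 scale).
CATEGORIES = [
    "Suitability for the task", "Self-descriptiveness", "Controllability",
    "Conformity with user expectations", "Error tolerance",
    "Suitability for individualization", "Suitability for learning",
]


def isonorm_score(ratings: List[int]) -> int:
    """Single usability score for one respondent, assumed here to be the
    sum of the 21 item ratings (hence the 21-147 range)."""
    assert len(ratings) == 21 and all(1 <= r <= 7 for r in ratings)
    return sum(ratings)


def category_means(all_ratings: List[List[int]]) -> Dict[str, float]:
    """Mean rating per category across respondents, assuming three
    consecutive items per category."""
    return {
        cat: mean(r for ratings in all_ratings for r in ratings[3 * i: 3 * i + 3])
        for i, cat in enumerate(CATEGORIES)
    }
```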

7.2 Effectiveness Evaluation

This study has focused on peer assessment to support groups or individuals in reviewing, grading, and providing in-depth feedback to their peers based on flexible rubrics. The effectiveness evaluation aims at investigating the impact on learning outcomes and the quality of feedback, as well as identifying the best peer assessment models in bMOOCs (Fig. 13).

Fig. 13. Effectiveness evaluation dimensions.

For this study we designed a questionnaire adapted from [20, 36]. The questionnaire consisted of two main parts. The first part contains 21 items in the two categories mentioned above, as illustrated in Table 3. The second part aimed at exploring the most effective peer assessment model in a bMOOC setting, as presented in Table 4. To ensure the relevance of these questions, a pre-test was conducted with 5 learners and 5 learning technologies experts. Their feedback led to the refinement of some questions and the replacement of others. The revised questionnaire was then given to the participants of the “Education and the Issues of the Age” course.

Table 3. The effectiveness evaluation of peer assessment in L2P-bMOOC (N = 57).
Table 4. Peer assessment models in bMOOCs.

7.2.1 Impact on Learning Outcome

Respondents were asked to indicate whether the peer assessment had affected their learning outcomes. As can be seen from Table 3, the overall response to evaluation items 1–9 was very positive at 4.3, with an acceptable standard deviation of 0.52. This indicates that peer assessment is a powerful evaluation method for detecting and correcting errors, reflecting, and criticizing, which are key elements in double-loop learning. The concept of double-loop learning was introduced by Argyris and Schön [1] within an organizational learning context. According to the authors, learning is the process of detecting and correcting errors. Error correction happens through a continuous process of inquiry, reflection, and (self-)criticism, which enables learners to test, challenge, and eventually update their knowledge, thereby improving their learning outcomes [3].

Peer assessment further fosters continuous knowledge creation, which is a prerequisite for effective learning [25]. This can be attributed to the fact that, in the peer assessment process, learners can learn from both the negative and positive aspects of a peer’s work and make use of them to gain an in-depth understanding of the learning topic and improve their knowledge, which leads to an enhancement of their learning performance.

7.2.2 Quality of Feedback

Key issues in peer assessment include the diversity of reviewers’ backgrounds and prior experience [41], the lack of accuracy and credibility of peer feedback [33], and the lack of transparency of the review process. Moreover, MOOC participants do not trust the validity and reliability of peer assessment results due to the absence of a clear evaluation authority (e.g. a teacher) and the low perceived expertise of students [22].

Rubrics provide a possible solution to these issues by offering clear guidelines for assessing peers’ work. Items 10 to 21 in Table 3 are concerned with the quality of the rubric-based peer feedback approach employed in L2P-bMOOC. In general, the respondents agreed that harnessing rubrics had a positive impact on the quality of the peer assessment task, in terms of the accuracy and credibility of peer feedback (item 11), the transparency of the review process (item 20), and the validity and reliability of the peer assessment results (items 10 and 12). Moreover, the study revealed that participants became more confident in their ability to assess peers’ work. They confirmed that following clear rubrics helped them understand the evaluation criteria and supported them in providing peers with detailed feedback.

7.3 Peer Assessment Models

One important goal in our study was also to investigate which peer assessment model fits best in a bMOOC context, as presented in Table 4.

From these results we can draw certain conclusions about the most effective peer assessment practices in bMOOCs as follows:

Time:

Optimal feedback should be provided early in the assessment process in order to give learners the opportunity to react and improve their work.

Anonymity:

An important aspect of peer assessment is to ensure the anonymity of the feedback. This way, reviewers can provide critical feedback and grading without considering interpersonal factors, e.g. friendship bias or personal dislikes.

Delivery:

Indirect feedback ensures more effective assessment results, as learners feel more comfortable giving honest feedback without any influence from peers.

Peer Grading:

Peer grading should only be a part of the final grade in order to ensure the validity of the assessment results.

Channel:

Assessment results can be more accurate and credible when learners receive feedback from multiple reviewers rather than from a single one. This way, learners have the chance to receive multifaceted feedback on their work.

Review Loop:

Having multiple feedback iterations achieves a better learning outcome, as learners can reflect on their assignment work multiple times.

Teacher Role:

The teachers should still take an active role in the peer assessment process, by defining evaluation rubrics, providing sample solutions, and checking the peer review results.

8 Conclusion and Future Work

Massive Open Online Courses (MOOCs) have a remarkable ability to expand access to a large scale of participants worldwide, beyond the formality of higher education systems. MOOCs use traditional assessment methods, including e-tests, quizzes, multiple-choice and short answer questions. These assessment tools are limited in effectively evaluating learners in open and distributed MOOCs. The main aim of this work was to determine how to assess learners’ performance in MOOCs beyond traditional automated assessment methods. Peer assessment has been proposed as an effective assessment method in MOOCs to address this challenge. Although peer assessment helps strengthen learners’ self-confidence and improve their performance, this type of assessment was not widely used in the reviewed studies, mainly due to issues related to the lack of transparency of the review process as well as the lack of validity and reliability of the assessment results. In the paper at hand we presented the details of a study conducted to investigate peer assessment in bMOOCs. The study results show that flexible rubrics have the potential to make the feedback process more accurate, credible, transparent, valid, and reliable, thus ensuring the quality of the peer assessment task. Furthermore, early feedback, anonymity, indirect feedback, peer grading as only a part of the final grade, multiple feedback channels, multiple feedback loops, as well as a supplementary teacher role are the most effective assessment practices in bMOOCs.

However, each of the research questions followed in this study reveals open research gaps that should be considered in future work, especially: (a) improving grading accuracy and (b) understanding which peer assessment scenarios affect learning outcomes in bMOOCs and how these scenarios can be supported. Recent evidence suggests inter-rater reliability, which measures the extent of agreement among raters, as a possible solution for improving grading accuracy. In order to develop a full version of peer assessment, additional studies are needed that consider several promising scenarios, such as (a) variation in the peer assessment loops, (b) variation in the review channels, e.g. peer assessment in pairs or groups, (c) variation in the peer feedback, e.g. written vs. oral feedback, (d) variation in the degree of anonymity, e.g. anonymous vs. open review, and (e) variation in the assessment tasks, e.g. formative vs. summative assessment.
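As an illustration of how inter-rater reliability could be quantified for peer grading, the sketch below computes Cohen’s kappa for two raters grading the same submissions; it is one possible extension under the stated assumption of two raters and categorical grades, not part of the current module.

```python
from collections import Counter
from typing import Sequence


def cohens_kappa(rater_a: Sequence[str], rater_b: Sequence[str]) -> float:
    """Cohen's kappa for two raters grading the same submissions.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and
    p_e the agreement expected by chance from the raters' marginal distributions.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in set(rater_a) | set(rater_b)) / (n * n)
    return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)


# e.g. two peers grading five essays on an A-D scale -> kappa ~ 0.74
print(cohens_kappa(["A", "B", "B", "C", "D"], ["A", "B", "C", "C", "D"]))
```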