
1 Introduction

We live in a world where the most valuable of all commodities is time. Many people wish they had more than 24 hours in each day, in order to solve all their issues and fulfill all their desires. This is affecting and transforming all areas of society, including education. Higher education is particularly affected, mainly because many students are also working adults. Fewer and fewer students have the chance or the time to participate in a three-month mobility or exchange program. Mobility in higher education is among the main topics with which the European Commission is concerned [1, 2]. Mobility is considered an important part of higher education, as it supports personal development and employability, fosters respect for diversity and a capacity to deal with other cultures, encourages linguistic pluralism underpinning the multilingual tradition of Europe, and increases cooperation and competition between higher education institutions [1, 2].

The Virtual Mobility Learning Hub (VMLH) is an innovative multilingual (seven languages) ICT-based environment, built as a directory of virtual mobility (VM) attributes, whose main aim is to promote collaborative learning, connectivist social networking as an instructional method, open educational resources (OERs) as the main content, and open digital credentials for the recognition and validation of VM skills, applicable to all ages and levels of digital education. It was created to help students living in different parts of the world learn and collaborate together, and to give them free access to relevant Massive Open Online Courses (MOOCs). Online courses, especially in the form of MOOCs, virtual learning environments and teacher mobilities have thus become solutions to the pressing problem of lack of time. Moreover, these solutions address other barriers to mobility, such as lack of resources and various disabilities.

The initial requirements for the VMLH were to provide a user-friendly interface, including a mobile interface, so as to encourage everyone to access it, engage in different open learning activities, connect with others and develop their VM competencies.

User experience is one of the most important factors to consider when measuring the quality of a system, especially when evaluating a MOOC.

Usability is defined in the context of Human-Computer Interaction as a “quality attribute that assesses how easy user interfaces are to use” [3]. In the context of MOOCs and LMSs (Learning Management Systems), usability is the degree to which students can perform the intended tasks with efficiency, effectiveness and satisfaction [4].

Usability evaluation comprises several techniques that combine engineering, psychology and user research to determine the positive and negative usability aspects of a software product, with the goal of improving it [5]. Usability is defined by five measurable quality components: learnability, efficiency, memorability, errors and satisfaction [3].

Heuristic evaluation, cognitive walkthroughs, interviews, focus groups, surveys, user observation sessions and eye tracking are among the most used methods and techniques for measuring the usability of software systems [4].

The current paper describes the methods used to test the usability of the VMLH, the results obtained and the conclusions drawn.

2 Related Work

To the best of our knowledge, very few studies have assessed the usability of MOOCs.

In [6], the authors develop a list of usability guidelines in the form of an adaptable usability checklist for evaluating the user interface of MOOCs. In [7], the authors propose and test a methodology for assessing user satisfaction with MOOCs, using techniques such as UMUX Lite, SUS questionnaires, the Testbirds Company’s approach, and the ISO standards. In [8], the authors describe a usability evaluation of three popular MOOC platforms: edX, Coursera and Udacity. The methodology combined user testing and questionnaires and involved 31 participants, with the focus falling more on the comparative side of the evaluation.
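For reference, the SUS questionnaire mentioned in [7] is scored with a standard formula: odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the summed contributions are multiplied by 2.5 to yield a 0–100 score. The following minimal Python sketch is illustrative only; the example responses are made up and are not data from any of the cited studies or from the present evaluation.

```python
def sus_score(responses):
    """Compute a System Usability Scale (SUS) score from ten 1-5 Likert responses.

    Standard SUS scoring: odd-numbered (positively worded) items contribute
    (response - 1), even-numbered (negatively worded) items contribute
    (5 - response); the summed contributions are multiplied by 2.5,
    giving a score between 0 and 100.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for item, response in enumerate(responses, start=1):
        if not 1 <= response <= 5:
            raise ValueError("responses must be on a 1-5 scale")
        total += (response - 1) if item % 2 == 1 else (5 - response)
    return total * 2.5


# Hypothetical responses from one participant (not data from any cited study):
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # -> 85.0
```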

Many more usability tests have been performed on LMSs, which form the basis of MOOCs.

The authors in [9] analyzed all the activity tools (Lesson, HTML page, Glossary etc.) and blocks (People, Calendar, Online Users etc.) present in the default installation of Moodle. The methodology consisted of an experience-based evaluation for web apps that combines heuristic evaluation, questionnaires and task-driven techniques. The study had 84 students, 8 teachers and 2 system administrators as participants. While the results indicated a good usability rating, some modules were clearly limited, which is explained by the early stage of Moodle development at the time of the study, 2008 (the first version was released in 2002; the current version, 3.8.0, was released in 2019).

Another study [10], carried out in our department in 2012, focuses on the usability evaluation of the mobile display of an LMS platform. The tested platform is also Moodle, whose mobile application was in its infancy at the time. The authors propose an evaluation framework that approaches usability from four perspectives: pedagogical usability (how an educational app supports students in their learning process), usability of the device (software and hardware issues that influence usability testing on mobile devices), usability of the content (the format and structure of the learning content and how well they are adapted to mobile displays) and usability of the mobile web interface (the elements and structure of the web interface, such as navigation). The authors mention several metrics, methods and guidelines for usability testing of mobile LMS apps that need to be further tested and validated.

Other studies focus on specific aspects of LMS usability evaluation, such as navigational aspects [11], or on specific universities or regions [12, 13].

A systematic review of usability and user experience evaluations of LMSs was published in 2019 [14]. The study analyzes 23 papers selected as relevant to the research and extracts overall aspects, identified by the authors of the original studies, related to general usability and user experience, characteristics of LMSs, activities/characteristics of usability evaluation models, and guidelines used in other domains to evaluate usability. While the research is valuable in that it provides a checklist-type usability evaluation framework for LMSs, it still needs to be refined through further testing and validation.

3 Usability Testing of the Virtual Mobility Learning Hub

3.1 Methodology

To validate the Virtual Mobility Learning Hub, we identified three main research questions:

  • Q1. Can a Moodle LMS sustain fully open, online, non-tutored courses?

  • Q2. What are the experiences that real students might have as learners in the VMLH?

  • Q3. Are the OpenVM MOOCs error-free and ready to become available to the higher education institutions (HEI) market?

To be able to answer these questions and to deploy the OpenVM MOOCs in the HEI market, we decided to run a usability test with real users, i.e. university students. They are among the HEI stakeholders, so they are a valid target group for the courses (Fig. 1).

Fig. 1. Introduction to the Media and Digital Literacy MOOC.

The VMLH started with 8 mini MOOCs, each composed of 3 courses, for the Foundation, Intermediate and Advanced Levels. Seven of them, already finalized, were analyzed and evaluated from a pedagogical and usability point of view. These courses are Media and Digital Literacy, Intercultural skills, Autonomy-driven Learning, Active Self-regulated Learning, Collaborative Learning, the Networked Learning Course and Open mindedness. In total, 21 courses (7 MOOCs × 3 levels) were evaluated. Figures 1 and 2 show some typical parts of the MOOCs that were evaluated.

Fig. 2. The Advanced Level of the Media and Digital Literacy MOOC.

The usability evaluation extended from April 2019 until November 2019. Some usability methods, such as the focus group, were carried out in one day, in the usability lab set up at the university, while others, such as the error testing, were carried out over a period of two months, from the participants’ homes or offices.

The study involved 139 participants (136 master students and 3 eLearning experts), all from the Politehnica University of Timisoara, Romania:

  • 2nd year students from the Multimedia Technologies Master – 21 persons

  • 2nd year students from the Communication, Public Relations and Digital Media Master – 17 persons

  • eLearning Experts (university professors) – 3 persons

  • 1st year students from various technical Master Programs – 98 persons

This study combined several usability testing methods [15]:

  1. Focus Group – to answer Q1 and Q2

  2. User Observation Sessions (as a combination of direct observation, think-aloud protocol, video-recorded observation, screen-logging observation and questionnaires) – to answer Q2 and Q3

  3. Error Testing – to answer Q3

  4. Survey (together with a written report) – to answer Q1, Q2 and Q3

  5. Expert Review – to answer Q1 and Q3

Each of the usability methods is described below, together with who took part in it, when and where it took place and how it unfolded, concluding with the usability problems that it revealed.

3.2 Focus Group

A focus group is an informal method to assess the features of a user interface. Usually, focus groups last approximately two hours and are run by a moderator who conducts a discussion about the issues and concerns that the participants have after testing the user interface of a product [16].

The authors organized a focus group with their students to see how they actually use the platform, by assigning key tasks to the users and analyzing their performance and experience. After the tasks were completed, the focus group discussion addressed the participants’ feelings, attitudes and thoughts on the website, in order to reveal their motivations and preferences.

The students were told that the platform was developed to offer users free access to open educational resources, such as MOOCs, on different topics. Testing of the platform was planned with the help of the master students, because they represent a target category of users for this website.

The testing took place at the Multimedia Center, which is part of the Politehnica University of Timisoara. The participants were 21 master students, aged between 23 and 26, most of them working in the IT industry and using the computer multiple times a day (Fig. 3).

Fig. 3. The setup of the focus group.

All of them were already familiar with MOOC platforms such as Coursera, edX, Udacity and so on. Furthermore, some of them had studied or worked in virtual teams or virtual mobilities.

As part of the focus group, the students had the task of creating an account on the platform. The authors had prepared some questions in advance, mostly referring to how the students felt about the process and their perception of the UI of the platform.

At first glance, they said that the user interface design was good and the colors were well chosen, but they did not have a pleasant experience while using the platform.

Some of the negative remarks the students made:

  • The platform flow is not user-friendly or intuitive;

  • Links are not intuitive and are hardly visible;

  • The page headers are too big;

  • The social media account login is hardly visible.

3.3 User Observation Session

User observation implies experts observing users as they perform assigned tasks, in order to see how they interact with the user interface. The experts take notes about “user performance and timing sequences of actions” [17].

This testing session also took place at the Multimedia Center inside the university, with 5 master students.

For the observation sessions, the authors prepared the materials required for proper usability testing in advance: a video recording and confidentiality agreement, a pre-questionnaire, a post-questionnaire, a list of participant tasks, a facilitator guideline, an observer guideline, tables for registering participant comments and notes, and sheets for times, steps and errors.

In this testing session, the participants (master students) had to complete the following tasks:

  1. Access the Active Self-regulated Learning MOOC that corresponds to your actual knowledge level. The users made several errors during the process, such as choosing the wrong course and thinking that they had finished the task without actually enrolling in the course. Some major issues were that the pre-assessment activity was not checked automatically, the course link was not clear, and the results were not complete for every grade representing the knowledge level.

  2. Change your profile picture. The users did not have any issues completing this task, and all of them completed it without making errors.

  3. Partially run the activities of the chosen course and complete the final test. The users had several issues trying to figure out which course level to pick, because the grades of the pre-assessment did not give them any information about the appropriate level. Besides that, some of the final test’s answers were wrong. The users also remarked that the videos were not integrated properly and that they had issues trying to play them. Another negative remark was that the links were very hard to spot; for example, the course link was hard to find and the final test was also hardly visible.

The participants’ activity was recorded with a mirrorless Panasonic Lumix GH4 video camera placed at an appropriate height above the desk, so that the participant would still feel comfortable. The recording was projected in real time on a wall behind the user, so the observers could follow the participants’ activity on the laptop (Fig. 4). Another video camera, a DSLR Canon 6D paired with a Sennheiser microphone, recorded the participants’ facial expressions and what they said during the process. The facilitator kept encouraging the participants to think aloud whenever possible. Both recordings were later correlated with the observers’ notes.

Fig. 4. The setup of the user observation method.

3.4 Error Testing Method

The error testing method involves participants performing actions on the platform under test and thus discovering the errors that show up in the process.

For this testing session, the users (98 master students) had to enroll in one MOOC course and complete all the activities associated with it. They reported the errors they encountered using an online form.

The students identified two major categories of errors: issues related to the platform’s functionality and issues regarding the content of the courses.

The most important errors were the following:

  • After completing some of the courses, the students did not receive the badges.

  • There were major issues with the quizzes at the end of the courses: some of them had missing questions, and without those questions the quiz could not be completed.

  • Some of them said that they could not check their progress because the progress bar did not update.

  • On some courses, they could not post anything in the forum section.

  • The students also had problems when they tried to upload a file in some of the courses’ sections.
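As an illustration only, the minimal Python sketch below shows one way such form-based error reports could be grouped into the two categories identified above and counted by frequency. The field names, category labels and sample entries are our own assumptions, not the actual form or data collected in the study.

```python
from collections import Counter

# Hypothetical error reports, loosely modeled on the issues listed above.
# The field names ("category", "error") are illustrative assumptions only.
reports = [
    {"category": "platform functionality", "error": "badge not awarded after course completion"},
    {"category": "platform functionality", "error": "progress bar did not update"},
    {"category": "course content", "error": "missing questions in the final quiz"},
    {"category": "platform functionality", "error": "badge not awarded after course completion"},
    {"category": "platform functionality", "error": "file upload failed"},
]

# Count how often each (category, error) pair was reported, so that the most
# frequently reported problems can be prioritized for fixing.
counts = Counter((r["category"], r["error"]) for r in reports)
for (category, error), n in counts.most_common():
    print(f"{n:2d}x  [{category}] {error}")
```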

3.5 Questionnaire Method

With the questionnaire method, experts collect data through a survey that can have both open and closed questions [17].

For this session, the authors prepared a questionnaire for the students enrolled in the Communication, Public Relations and Digital Media Master, so that they could express their opinion about their experience with the platform.

A total of 17 students completed it and, overall, they appreciated the platform, even though they also said that it needed to be improved. They also had to write a report about their interaction with the platform. The survey’s results are presented below.

When the students were asked about the quality and the quantity of the activities they had to explore, most of them considered that the activities were of high quality (Fig. 5) and that their quantity was just right (Fig. 6).

Fig. 5. Student answers regarding the quality of the MOOC activities.

Fig. 6. Student answers regarding the quantity of the MOOC activities.

When they were asked how they would compare the MOOCs from the OpenVM platform with their faculty courses, more than half of them said that they considered the MOOCs more interesting (Fig. 7).

Fig. 7. Student answers regarding OpenVM MOOCs vs. faculty courses.

The students also had to say what they liked and disliked most about the platform and the courses. They said that they liked the fact that the courses are free and can be accessed anytime, anywhere, from any device connected to the Internet. They also thought that the video materials were good. What they did not like was that they did not have the chance to communicate with their colleagues during the courses, and that they had issues when accessing the quizzes.

3.6 E-learning Experts’ Evaluation

The eLearning experts who evaluated the platform helped reveal some general “neuralgic points” of the MOOCs. Firstly, they pointed out that the students need to better understand what the role of the pre-assessment test is.

The experts noticed that some courses have too many questions, some have extremely complicated questions, and some have nonsensical or duplicate questions. Another issue related to the course flow is that each course should have a clear pathway of content and activities; at the moment of the evaluation, everything was put together and was difficult for students to understand.

Another issue they found was that the videos integrated using H5P did not work as expected; their suggestion was to embed the videos in an alternative technical manner.

In addition, the experts noted that inserting images on the main page of a course is not beneficial, as students need to scroll down too much to reach the content, which is even more frustrating on a mobile phone. Another issue is that courses which allow students to self-check their progress can easily award badges that were not actually earned.

Finally, they concluded that students need a better understanding of what they have to do in each course, as well as of the fact that they can receive a badge and what they need to do in order to receive it.

3.7 Identified Usability Problems and Their Severity Ratings

The authors used severity ratings to prioritize the issues that most affected the users’ experience. According to [18], three factors are considered when analyzing a usability problem: frequency (“is it common or rare?”), impact (“will it be easy or difficult for the users to overcome?”) and persistence (“is it a one-time problem that users can overcome once they know about it or will users repeatedly be bothered by the problem?”).

Jakob Nielsen also proposed a 0-to-4 scale to rate the severity of usability problems, as follows [18] (a minimal sketch of prioritizing issues by these ratings is given after Table 1):

  • 0 = The problem is not a usability issue.

  • 1 = “Cosmetic problem only”: it does not have to be fixed unless extra time is available on the project.

  • 2 = “Minor usability problem”: as the issue is not severe, it should be fixed only after major problems are solved.

  • 3 = “Major usability problem”: fixing this kind of issue should be a high priority because it affects the user experience.

  • 4 = “Usability catastrophe”: the product should not be released until this kind of issue is resolved (Table 1).

Table 1. List of identified usability problems and their severity ratings
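As referenced above, the following minimal Python sketch illustrates how issues rated on Nielsen’s 0–4 scale can be ordered so that major problems and catastrophes are addressed first. The issue descriptions and ratings below are placeholders, not the actual entries of Table 1.

```python
from dataclasses import dataclass

# Nielsen's severity scale [18], encoded as labels for reporting.
SEVERITY_LABELS = {
    0: "not a usability issue",
    1: "cosmetic problem only",
    2: "minor usability problem",
    3: "major usability problem",
    4: "usability catastrophe",
}


@dataclass
class UsabilityIssue:
    description: str
    severity: int  # 0-4 rating, combining frequency, impact and persistence


# Placeholder issues for illustration; not the actual contents of Table 1.
issues = [
    UsabilityIssue("page headers are too big", 1),
    UsabilityIssue("quiz cannot be completed due to missing questions", 4),
    UsabilityIssue("course links are hardly visible", 3),
    UsabilityIssue("social media login is hardly visible", 2),
]

# Address the most severe problems first.
for issue in sorted(issues, key=lambda i: i.severity, reverse=True):
    print(f"[{issue.severity} - {SEVERITY_LABELS[issue.severity]}] {issue.description}")
```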

4 Results and Recommendations

The usability testing of the Virtual Mobility Learning Hub had the role of helping the authors find out whether the Moodle LMS can sustain fully open, online, non-tutored courses, what experiences students have on the platform, and whether the MOOCs are ready to become available to the HEI market.

Most of the participants consider that MOOCs could easily replace some of their faculty courses and that it is easier for them to learn from MOOCs, because they can access them anytime, from any device with an Internet connection.

On the other hand, the students were not so pleased with the experience of using the platform, because they encountered many issues that had a negative impact on their journey toward getting the course badges. Some of them did not receive the badges even though they finished the course, and other students could not finish the courses because some of the quizzes were not implemented correctly and many of the “correct” answers were in fact wrong.

The participants proposed some improvements for both the platform and the courses. They believe that the experience of using the platform would improve if the videos and PDF documents were integrated better, because at the moment of the evaluation their integration was defective. Also, the links should be more readable, visible and clear, and the quizzes should be revised and corrected, especially the checkbox functionality. Another important aspect they mentioned concerns the tasks and activities of each course, which need to be revised and displayed correctly. The participants believe that, for some of the courses, the structure should be modified to be more user-friendly, because they had issues understanding exactly what they were supposed to do.

5 Conclusions

This paper reports on a usability evaluation of a MOOC platform, namely the Virtual Mobility Learning Hub, an innovative multilingual ICT-based environment that supports virtual mobilities between universities. A mix of usability evaluation methods was used, namely focus groups, user observation sessions, error testing, surveys and expert reviews. A total of 139 participants took part in the study, most of them students enrolled in master programs in Communication and various technical fields, together with several eLearning experts. The paper reported on each usability method used and how, when and where it was applied, concluding each section with the usability problems that were identified.

At the end, we summarized the major categories of usability problems that emerged from the process. Some problems pertained to the platform itself (the social media account login is hardly visible, the platform flow is not user-friendly and intuitive, links are not intuitive and are hardly visible, the integration of YouTube videos and PDF documents is defective, etc.), while others pertained to the actual creation and formatting of the learning materials (not everything is written in English or translated, some test answers are wrong, the courses’ names are confusing and not specific, the tasks and activities were not displayed correctly, etc.).

Many suggestions for improvement were derived from the applied usability methods, and some of them have already started to be implemented.

The usability evaluation allowed us to answer the three main research questions as follows:

  • A1. The evaluation showed that, indeed, a Moodle-based Learning Management System has all the required functionalities and offers the right user experience for sustaining fully open, online, non-tutored courses. However, this also depends on the actual content of the courses and on how the teachers set up the learning environment.

  • A2. The students generally reported that their experience in the VMLH was a good one. They rated the courses as better than their faculty ones, and they found the VMLH courses to have the right amount of activities and those activities to be of good quality. However, they often stumbled upon small to medium annoyances, such as the hardly visible social media login, overly large page headers and the defective integration of some multimedia learning materials.

  • A3. The OpenVM MOOCs are not error-free, and many, though small, improvements need to be made in order for the courses to become available to the HEI market. Most of the improvements concern the content of the courses, so the tutors should be in charge of implementing them.

The major contribution of the paper lies in using a mix of 5 usability evaluation methods, with a large group of participants (139 persons), over an extended period of time (8 months), to derive the usability issues of a MOOC platform. The study also shows how such mixed usability testing can be done in a university environment, with a practical outcome but also with a pedagogical purpose. The improved version of the learning hub is now used by students and professors in 5 universities.