Introduction

Massive open online courses (MOOCs) have rapidly increased in number and expanded the landscape of higher education. By 2017, more than 9,400 MOOCs had been offered, and an estimated 81 million people had signed up for at least one (Shah 2017). Although MOOCs were initially conceived as free educational opportunities for everyone and a potential means to democratize higher education (Dillahunt et al. 2014), they have evolved beyond the concept of open education offered by universities. After less than a decade, MOOCs have become different things to different stakeholders. Despite the original mission of democratizing education, many MOOC learners in fact already have a college education and employment, and face no significant barriers to affording higher education (Christensen et al. 2013; Dillahunt et al. 2014; Rohs and Ganz 2015). Their strongest motivation and expectation for taking MOOCs is to obtain professional and career benefits (Egloffstein and Ifenthaler 2017; Zhenghao et al. 2015). Higher education institutions have generated a number of new business models using MOOCs (Burd et al. 2015). MOOC providers, such as Coursera, edX, and Udacity, have been expanding their services in diverse ways for monetization (e.g., specializations, nano-degrees, MicroMasters) (Waters 2015). Corporations have started to give serious consideration to MOOCs as a means of employee training (Konrad 2017).

While MOOCs have generated great potential and interest for the various stakeholders involved, the quality of the learning experiences they offer is still debatable (Margaryan et al. 2015; Toven-Lindsey et al. 2015). For a course in any learning environment and format, instructional quality stems from pedagogical considerations. Therefore, the purpose of this study is to review the pedagogical design of current MOOCs using e-learning principles. In particular, the study reviewed MOOCs in computer science (CS). We chose to focus on CS courses for two reasons: 1) it is the field in which the largest number of MOOCs are offered, owing to increased interest in and workforce demand for CS fields (National Academies of Sciences, Engineering, and Medicine 2018); and 2) providing students with authentic, job-related, problem-solving tasks that require collaborative work is critical in this field (Grover and Pea 2013; Robins et al. 2003; Wing 2011). The e-learning design principles by Clark and Mayer (2011) are evidence-based instructional design guidelines applicable to all forms of e-learning. Using these principles as a theoretical framework, the authors of the present study were guided by the following questions:

  1. To what extent are e-learning design principles applied in MOOCs?

  2. How have e-learning design principles been used differently by different MOOC providers?

  3. To what extent has the application of e-learning design principles differed depending on the difficulty level of MOOCs?

  4. How have e-learning design guidelines been applied differently in CS MOOCs by different providers and by the difficulty levels of the courses?

Literature review

MOOCs

Distance education has long been an important means of enabling the professional development and lifelong learning of adults. Even against that history, the explosive growth of MOOCs in a short period of time is notable. Despite their short history, the "openness" of MOOCs and, as an accompanying consequence, their "massiveness" have already brought about many changes to higher education, traditional online education, open education, professional development, career development, and corporate training. Overall, open access to MOOCs offered by prestigious higher education institutions has increased the visibility and public acceptance of online learning.

For individuals, MOOCs offer a plethora of new options for pursuing personal and professional goals by updating their skills and knowledge in diverse topics and disciplines taught by universities (Castaño-Muñoz et al. 2017). Among European participants, MOOCs were considered an important means of professional development both for unemployed participants seeking to reenter the job market and for those who lacked employer support for their professional development (Castaño-Muñoz et al. 2017). Another study of MOOC learners reported that high self-regulators, compared to low self-regulators, described their primary motivation for taking MOOCs as building specific skill sets and expertise relevant to their professional roles and career development (Littlejohn et al. 2016). However, these learning intentions (e.g., tangible job and career benefits, knowledge gain for work tasks) might not align well with participants' actual learning experiences (e.g., reading and watching the content) in current MOOCs (Milligan and Littlejohn 2014). MOOCs have great potential to support adults' professional learning and development by providing personalized, self-regulated, and socially networked technology-enabled learning environments; however, such successful innovation depends on good design decisions and appropriate affordances.

Regarding the changes MOOCs have brought to higher education institutions: unlike much traditional online education (e.g., online degree programs), where course and program development efforts tend to originate at the individual faculty or program level, MOOC initiatives are often determined by higher-level administration because MOOCs have become an important means of branding the value of institutions (Howarth et al. 2017). Accordingly, the design of MOOCs is often facilitated by an institutionally standardized instructional design approach. While centralized resources can support course quality to some extent, the large scale and heterogeneity of participants in MOOCs present unique challenges that instructors need to consider in designing this new type of online course. For example, MOOC instructors identified learner engagement, learner interaction, and limited assessment methods as major pedagogical challenges, given the large enrollments and the limitations associated with platform affordances (Zhu et al. 2018a).

In a review of 183 empirical studies on MOOCs published between 2013 and 2015, 46.4% of the research focused on topics related to course design (Veletsianos and Shepherdson 2016). In a more recent review of 146 empirical MOOC studies published between 2014 and 2016, Zhu et al. (2018b) reported that 32.9% of the publications were design-focused. These course design-focused studies address various aspects of the design, development, and implementation of features such as digital badges, assessments, tools for social interaction, development of a new MOOC, and instructional media; yet they tend to examine a single design feature and its relationship to students' learning, behavior, satisfaction, and perception. Recent empirical work on the design of MOOCs has been more diversified. For example, by analyzing 4,466 participants' comments on ten highly rated MOOCs, Hew (2018) identified the five course design factors most frequently mentioned as engaging MOOC participants: course resources, peer interaction, instructor availability and passion, active learning, and problem-oriented learning. A few studies have reviewed the application of instructional design principles to the design of MOOCs (Lowenthal and Hodges 2015; Margaryan et al. 2015; Watson et al. 2017); nevertheless, overall knowledge regarding the extent to which research-proven instructional design principles have been applied to the design of MOOCs remains limited.

Evaluation of MOOC quality

The quality of open online education has been of interest to many stakeholders, including educational providers and consumers (Jansen et al. 2017; Stracke 2019). Given such interest, various organizations have developed quality standard models or guidelines for the quality assurance of e-learning (Ossiannilsson et al. 2015). There are also quality assurance models developed specifically for MOOCs, including OpenupEd (Rosewell and Jansen 2014), initiated by the European Association of Distance Teaching Universities, and the European Foundation for Quality in e-Learning (EFQUEL) (Creelman et al. 2014). These quality assurance models for e-learning or MOOCs are intended mainly for certification, accreditation, benchmarking, or labelling as a frame of reference (Ossiannilsson et al. 2015). As such, they address quality principles and practices not only at the individual course level (micro level) but also at the institutional and national/international levels (macro level), such as institutional management and governance of online education programs. For example, the OpenupEd quality label includes 11 course-level benchmarks and 21 institutional-level benchmarks covering six areas: strategic management, curriculum design, course design (support), course delivery (management), staff support, and student support.

In contrast to these general, macro-level frameworks, a few researchers have taken a closer look at the pedagogical or instructional design quality of MOOCs (e.g., Chukwuemeka et al. 2015; Yilmaz et al. 2017; Yoila and Chukwuemeka 2015), examining how consistent the design of MOOCs is with existing instructional design principles and models. For example, Margaryan et al. (2015) reviewed 76 MOOCs in various disciplines to examine whether Merrill's (2002) first principles of instruction were reflected in their design. The study reported that most MOOCs implemented only a few of the established principles, suggesting generally poor instructional quality. A more recent study by Watson et al. (2017) also reviewed the instructional quality of MOOCs by applying the first principles of instruction. They evaluated nine MOOCs that specifically targeted attitudinal change, using the same items from the instrument developed by Margaryan et al. (2015). In contrast to Margaryan et al.'s (2015) results, their review of MOOCs for attitudinal learning found that the first principles of instruction were generally well incorporated into the course designs.

Similarly, Yilmaz et al. (2017) applied Chickering and Gamson's (1987) seven principles of good practice to an evaluation of MOOCs. Those seven principles were initially developed to evaluate face-to-face education, but they have also been used in the evaluation of online courses (Hathaway 2013; Tirrell and Quick 2012). The seven principles are: (1) interaction between students and teacher, (2) reciprocity and cooperation among students, (3) active learning, (4) feedback, (5) time on task, (6) high expectations, and (7) differentiation. Yilmaz et al. (2017) developed a measurement tool based on these principles and reviewed six MOOCs offered on Udemy. The study reported that the reviewed courses generally met the established criteria for online learning environments.

Instead of using instructional design principles originally developed for traditional face-to-face instruction, Lowenthal and Hodges (2015) adopted the Quality Matters (QM) (2014) framework, a quality assurance model for evaluating online course design in the United States, to examine whether MOOCs could meet the same quality standards as traditional online courses. The QM framework covers eight areas of online course quality: (1) course overview, (2) learning objectives, (3) assessment, (4) instructional materials, (5) learner interaction, (6) course technology, (7) learner support, and (8) accessibility. Point values are assigned to these areas based on how essential they are to online course quality. Using QM, Lowenthal and Hodges (2015) reviewed six STEM MOOCs, two from each of three MOOC providers (Coursera, edX, and Udacity), and found that none achieved the passing score of 85%, although two scored high enough to still be considered high-quality online courses. Although QM is a quality assurance framework for online courses, it tends to heavily emphasize basic aspects of course design rather than important pedagogical strategies for promoting students' interaction, engagement, and collaborative learning (e.g., more point value is assigned to learning objectives than to learner interaction) (Lowenthal and Hodges 2015). The authors cautioned that online courses could score high on QM simply because they met standards for basic course elements.

Moreover, the distinctive features of e-learning courses often include limited synchronous interaction and a heavy reliance on presenting content in multimedia formats. E-learning courses widely use multimedia components together with relevant instructional methods to help individuals achieve their personal learning goals and acquire job-applicable knowledge and skills. Thus, the effective design of multimedia learning materials, learning activities, and instructional methods is critical in such learning environments. Clark and Mayer (2011) introduced e-learning principles that provide guidelines for designing e-learning instruction in a way that supports, rather than interferes with, natural human learning processes. However, existing evaluations of MOOC quality have rarely considered whether a course or its instruction is compatible with these e-learning design principles.

E-learning design principles

E-learning can be broadly defined as "instruction delivered via a digital device" (Clark and Mayer 2011, p. 8). Due to the distinctive features of e-learning environments, such as the lack of face-to-face interaction and a heavy reliance on multimedia, how to use multimedia in a way that supports cognitive learning processes is a challenge of great importance in e-courses (Clark and Mayer 2011). The e-learning design principles were developed from the principles of multimedia learning proposed by Mayer and his colleagues. The principles of multimedia learning are guidelines for effectively presenting information in a multimedia format, and they have been empirically validated by over 25 years of research (Mayer 2009). Clark and Mayer (2011) extended these evidence-based principles and organized them into a set of principles for e-learning to support the effective design and use of multimedia information with appropriate instructional strategies in e-courses.

Based on the e-learning principles, Clark and Mayer (2011) further developed a set of 56 design guidelines covering different aspects of e-learning design, such as the use of a visual-only mode, the use of audio and visual modes, navigational options, collaborative learning, building problem-solving skills, and teaching job tasks. For example, Table 1 presents the guidelines for e-learning designed to teach job tasks alongside the relevant e-learning principles. The guidelines provide specific, practical suggestions for e-learning design in different learning contexts and conditions. The e-learning principles and guidelines are applicable to all types of e-learning, including courses that simply provide information and courses that focus on job-related skill development, both of which are relevant to the context of MOOCs. Given that MOOCs are a type of e-learning heavily dependent on the use of multimedia, these e-learning principles can shed light on the instructional quality of MOOCs.

Table 1 Guidelines for e-learning designed to teach job tasks (Clark and Mayer 2011, pp. 406–407)

Methods

We selected 40 CS MOOCs from two platforms: 20 from Coursera and 20 from edX. Coursera and edX are the two biggest U.S.-based MOOC providers, offering more than 3,500 MOOCs, which account for over 50% of the courses in the MOOC market (Sanchez-Gordon and Luján-Mora 2018; Wexler 2015). Given the two platforms selected, the MOOCs reviewed in this study are all xMOOCs, a more recent form of MOOC that follows the more formal, traditional structure of a higher education course. xMOOCs are relatively short in duration, typically ranging from 4 to 8 weeks, and target specific topics to be learned during that period. Of the 20 courses selected per platform, 10 were introductory-level and 10 were intermediate-level, allowing us to compare design differences between the two difficulty levels. In total, therefore, 20 introductory-level and 20 intermediate-level courses were reviewed.

To measure the pedagogical design quality of each MOOC, we developed an instrument adapting Clark and Mayer's (2011) e-learning guidelines and principles. The instrument included 56 items corresponding to the 56 e-learning design guidelines. Following Margaryan et al.'s (2015) approach, we used a 4-point Likert-type scale to assess the degree to which each e-learning guideline was reflected in the course (an illustrative scoring sketch follows the scale):

  • 0 — none: the guideline was not applied at all.

  • 1 — to some extent: serious omissions or problems exist; the guideline was applied in fewer than 50% of the learning activities and components reviewed.

  • 2 — to a large extent: generally satisfactory, but some omissions and problems exist; the guideline was applied in 51–80% of the learning activities and components reviewed.

  • 3 — to a very large extent: excellent; the guideline was applied in 81–100% of the learning activities and components reviewed.
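To make the rubric concrete, here is a minimal sketch in Python of how the percentage bands map onto item scores. The function name and the idea of a pre-computed proportion are our own illustration, not part of the published instrument; the rubric itself also involves the reviewer's qualitative judgment of omissions and problems.

```python
def guideline_score(proportion_applied: float) -> int:
    """Map the observed share of learning activities applying a guideline
    (0.0-1.0) to the 4-point rubric score (0-3).

    Hypothetical helper: the boundary at exactly 50% is resolved into
    band 1 here, since the published bands leave it unstated.
    """
    if proportion_applied == 0.0:
        return 0  # none
    elif proportion_applied <= 0.50:
        return 1  # to some extent
    elif proportion_applied <= 0.80:
        return 2  # to a large extent
    else:
        return 3  # to a very large extent


assert guideline_score(0.0) == 0
assert guideline_score(0.45) == 1
assert guideline_score(0.75) == 2
assert guideline_score(0.95) == 3
```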

To demonstrate the validity of the instrument, the three researchers of the study assessed its content validity, adopting an expert review approach suggested in previous research (e.g., Davis 1992; Lawshe 1975; Waltz et al. 2010). All three researchers hold a doctoral degree in instructional design and technology, and each had 5–15 years of experience as an instructional designer. They also had expertise in e-learning principles and previous experience in designing, developing, and teaching a MOOC. Each researcher independently reviewed each item and rated its relevance to the quality of a MOOC on a 4-point scale (Davis 1992). Of the 56 items, 50 reached a content validity ratio of 1, meaning that all three researchers rated those items as relevant (Lawshe 1975). The content validity ratios of the remaining six items ranged from −1.00 to −.33, indicating that two or more researchers rated each of them as irrelevant. All six of these items came from the e-learning guidelines for developing games and simulations. Using Davis's (1992) method, the overall content validity index was calculated to be .89.
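As a worked illustration of these statistics (our reading of Lawshe's and Davis's formulas, not code from the study), the sketch below reproduces the reported values for a three-expert panel:

```python
def content_validity_ratio(n_relevant: int, n_experts: int) -> float:
    """Lawshe's (1975) CVR: (n_e - N/2) / (N/2), where n_e is the number
    of experts rating an item relevant and N is the panel size."""
    return (n_relevant - n_experts / 2) / (n_experts / 2)


# With three experts, the possible CVR values match those reported:
assert round(content_validity_ratio(3, 3), 2) == 1.00   # all three relevant
assert round(content_validity_ratio(1, 3), 2) == -0.33  # only one relevant
assert round(content_validity_ratio(0, 3), 2) == -1.00  # none relevant

# One plausible reading of the overall index: the share of items judged
# relevant by the full panel, 50 of 56, is approximately .89.
print(round(50 / 56, 2))  # 0.89
```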

Following the content validity results, we excluded the six items with low content validity ratios from the instrument. Given the descriptions of the six guidelines, all of which concern developing games and simulations, the three researchers agreed that they were not applicable to MOOC contexts. For example, the guidelines for games and simulations assume learning environments in which players follow rules and have control over their experiences within the game. Furthermore, none of the 40 selected MOOCs used games or simulations.

Thus, we used the remaining 50 items to evaluate the design quality of each MOOC. First, two researchers independently evaluated a MOOC by reviewing the course and assigning a score to each item in the instrument. The researchers evaluated 20 MOOCs this way. Inter-rater reliability between the two researchers reached a Cohen's kappa coefficient of .85, which indicates a very good strength of agreement (Altman 1991; Cohen 1968). Scoring discrepancies were discussed until 100% agreement was established. Once this strength of agreement was confirmed, each researcher individually reviewed 10 of the remaining 20 courses.
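For readers who wish to reproduce this kind of reliability check, a minimal sketch follows. The rating vectors are hypothetical, and we assume the widely used scikit-learn implementation rather than any tool named in the study:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical item-level scores (0-3) from two raters for one course;
# in the study, each of the 50 instrument items received such a score.
rater_a = [3, 2, 2, 0, 1, 3, 2, 1, 0, 2]
rater_b = [3, 2, 1, 0, 1, 3, 2, 1, 0, 3]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")

# Because the rubric scores are ordinal, a weighted kappa (cf. Cohen 1968)
# could also be considered, via the `weights` argument:
weighted = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
```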

Data analyses

We first categorized the 50 items into the corresponding e-learning principles and e-learning guideline categories (see Table 2). We then calculated the average score for each principle and guideline and examined descriptive statistics to understand the extent to which the different principles and guidelines have been applied in MOOCs. We also performed two sets of one-way ANOVAs to examine differences between the two platforms and between the two course difficulty levels in the extent to which the e-learning principles were reflected in the MOOCs. We excluded the two principles represented by a single item (the job validity and redundancy principles) from these analyses, leaving a total of 11 principles as the dependent variables. Two further sets of ANOVAs examined differences in the extent to which the e-learning guidelines were applied in MOOCs between the two platforms and between the two difficulty levels.
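As an illustration of this analysis (a sketch over simulated data, not the study's actual scripts), each one-way ANOVA compared the per-course scores for one principle across the two groups:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Hypothetical per-course mean scores (0-3) for one principle,
# 20 courses per platform, mirroring the study's design.
coursera_scores = rng.uniform(0, 3, size=20)
edx_scores = rng.uniform(0, 3, size=20)

# A one-way ANOVA with two groups of 20 yields the df = (1, 38)
# reported for the F tests in the Findings section.
f_stat, p_value = f_oneway(coursera_scores, edx_scores)
print(f"F(1, 38) = {f_stat:.2f}, p = {p_value:.3f}")
```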

Table 2 (a) e-Learning principles and the number of items (Clark and Mayer 2011). (b) e-Learning design guidelines and sample items (Clark and Mayer 2011)

Findings

Overall, we found that the e-learning principles were applied to some degree (M = 1.51, SD = .57) in the design of the reviewed courses. However, the extent of application differed significantly from principle to principle depending on the platform and the course difficulty level. Two principles (personalization and practice) were applied to a greater extent in edX courses than in Coursera courses. We also observed that two principles (practice and feedback) were applied to a greater extent in intermediate-level courses than in introductory-level courses, as were the e-learning guidelines for enhancing abilities and skills for job tasks.

Application of E-learning principles

Table 3 presents the means and standard deviations of the e-learning principle and guideline scores. For each principle and guideline, the range of possible scores was 0–3. Among the 11 e-learning principles, the segmentation principle received the highest score (M = 2.7, SD = .53), close to the maximum of 3. The next highest-scoring principles were the pretraining (M = 1.82, SD = .89) and contiguity (M = 1.74, SD = .63) principles, whose mean scores approached 2. For the principles of learner control, modality, coherence, multimedia, and personalization, the mean scores ranged from 1.34 to 1.53, indicating that these principles were applied in the reviewed MOOCs to a lesser degree on average. The remaining three principles, practice (M = .90, SD = .78), worked examples (M = .88, SD = .42), and feedback (M = .69, SD = .70), scored lower than 1, suggesting that these principles were rarely applied in the reviewed MOOCs.

Table 3 Descriptive statistics of the scores of the e-learning principles and guidelines

Application of E-learning principles to MOOCs in two different platforms

The ANOVAs showed significant differences between the two platforms for two of the e-learning principles: the personalization principle, F(1, 38) = 5.70, p < .05, and the practice principle, F(1, 38) = 13.96, p < .01. These two principles were applied to a greater extent in edX MOOCs than in Coursera MOOCs (see Table 4).

Table 4 Tests of between-subjects effects depending on the platform

Application of E-learning principles to MOOCs at two difficulty levels

The ANOVAs showed significant differences between MOOCs at the introductory and intermediate levels for two of the e-learning principles: the practice principle, F(1, 38) = 5.92, p < .05, and the feedback principle, F(1, 38) = 16.01, p < .001. These two principles were applied to a greater extent in the intermediate-level courses than in the introductory-level courses (Table 5).

Table 5 Tests of between-subjects effects and descriptive statistics of the different application of e-learning principles depending on the course difficulty level

Application of E-learning guidelines to MOOCs in two different platforms

Another set of ANOVAs compared the application of the e-learning guidelines between the two MOOC platforms. No significant differences were found for any of the guideline categories.

Application of E-learning guidelines to MOOCs at two difficulty levels

We found a significant difference between the two difficulty levels only in the application of the e-learning guidelines related to job tasks, F(1, 38) = 5.90, p < .05. Intermediate-level courses were more likely than introductory-level courses to incorporate the guidelines for building job-task skills (Table 6).

Table 6 Tests of between-subjects effects and descriptive statistics of the different application of e-learning guidelines to enhance job task skills depending on the course difficulty level

Discussion

This study investigated the extent to which Clark and Mayer's (2011) e-learning principles have been applied in introductory- and intermediate-level CS courses on the two largest xMOOC platforms, Coursera and edX. These empirically validated e-learning principles address various design areas, such as multimedia learning elements, teaching job tasks, collaborative activities, and problem-solving skills.

First, with regard to the areas and extent to which e-learning principles are applied in current CS MOOCs, while some variation exists as reported in the findings, we observed a relatively low application of the principles overall in the design of the reviewed MOOCs. These findings align with previous research on the design and instructional quality of MOOCs. For example, the few principles applied to a very large extent in the reviewed MOOCs related to the organization (e.g., modules and lessons) and presentation of content (e.g., well-segmented videos). Likewise, Margaryan et al. (2015) reported that almost all of the xMOOCs they reviewed scored highly on the organization and presentation of course materials. However, the majority of the xMOOCs they reviewed offered neither learning activities centered on divergent problems pertinent to real-world application nor opportunities for applying new knowledge and skills; in our review, similarly, the application of the guidelines for job tasks and of principles such as practice and worked examples was particularly low. In addition, the xMOOCs reviewed by Margaryan's team tended to lack meaningful interaction and feedback, which are important to effective learning experiences for adult learners. Similarly, the MOOCs reviewed in this study showed a low application of the feedback principle and of the guidelines for collaborative learning.

Considering the current demands and trends (e.g., professional development, career development, diverse types of credentials) driving the overall growth of MOOCs, as well as the continued high interest in CS MOOCs, it is crucial to design MOOCs conducive to participants' expertise building. Learning programming involves various processes, such as acquiring knowledge of syntax and algorithms and developing problem-solving skills (Linn and Dalbey 1989). However, not all courses provided sufficient opportunities to acquire the learning content through different types and levels of practice with varied degrees of scaffolding. While most courses used visual tools to teach coding skills, learners had limited opportunities to engage in practice and problem-solving or to receive guidance and feedback throughout the problem-solving process. Considering the unique strength of CS in its technological capacity for innovation (e.g., auto-graded assessments), we believe that CS MOOCs can be improved by including more opportunities for teaching job tasks and developing problem-solving skills. In doing so, the associated principles and guidelines by Clark and Mayer (2011) used in this study could be useful for designing and developing such learning components.

Also, although learners from diverse backgrounds, with different prior knowledge levels, ages, and online learning experiences, register for MOOCs, MOOC courses rarely consider the needs of the individual learner. It is challenging to engage a large group of diverse learners and create personalized learning experiences that suit their learning needs (Beaven et al. 2014; Watson et al. 2016, 2017). To support individually different online learners, Clark and Mayer (2011) suggest providing various types of and options for learner control when learners are "heterogeneous regarding background and/or instructional needs and the cost to produce tests and decision logic gives a return on investment" (p. 408). For instance, regarding navigational options, less self-regulated learners or learners with lower prior knowledge may need different levels of support from the course or instructors. However, we found that none of the reviewed courses provided adaptive guidance, diagnostic tests, or pre-tests to accommodate various learning needs. Considering current technological advancements in learner analytics, adaptive guidance, and personalized learning environments, scholarly efforts in these areas can help MOOCs more fully address the diversified needs and capacities of learners.

Furthermore, the CS field tends to require effective communication skills and collaborative working abilities, as programming in real-world contexts often occurs in team-based projects (Gruba and Sondergaard 2001; Zarb and Hughes 2015). In response to the high demand for collaborative working abilities, adopting team-based work has been recommended in CS courses (Chu and Hwang 2010; Gonzalez 2006). For example, pair programming, in which two individuals work side by side designing, implementing, and testing a programming solution, has been adopted in CS education (Williams and Kessler 2002). In the reviewed MOOCs, however, course assignments were mostly designed for individual work, and the assessments relied on self-checks with some auto-generated feedback or peer grading. Interaction among learners was also limited, mostly to optional and unstructured asynchronous discussion activities embedded in the platforms. While limited learner-to-learner interaction is a common phenomenon in xMOOCs, given their nature and enrollment scale (Margaryan et al. 2015; Tawfik et al. 2017), there is an increasing volume of research on social engagement and small-group activities in MOOCs (e.g., Barak et al. 2016; Zhang et al. 2016). For example, Barak et al. (2016) reported the importance of social engagement for motivation gains, suggesting that posting two or three messages per week can have an important impact on participants' motivation and that working with 4–5 people in a small study group can be a useful means of improving motivation. Although collaborative learning itself may not be a primary learning goal of CS MOOCs, how to bring about pedagogical innovation through social and collaborative learning using more structured peer interaction in MOOCs is an important next step for consideration.

Second, our findings indicate that certain design principles were applied to a greater extent in the MOOCs of one platform than the other. Due to the unique nature of MOOCs (e.g., heterogeneity of learners, massive numbers of registrants, media dependence), instructors experience design challenges related to assessments, engagement, and interaction regardless of their prior experience with online teaching (Zhu et al. 2018a). Considering the short history of MOOCs and instructors' relative unfamiliarity with MOOC learning management systems (LMSs), it is not surprising that about 60% of MOOC instructors seek help from the platform when encountering design challenges. In other words, the design decisions that instructors make for their MOOCs can be limited to, or even driven by, the technological affordances offered by the MOOC platforms. Although MOOC providers give general design guidelines and assistance and make efforts to improve their own platforms, the extent to which instructors and instructional designers from universities can implement their pedagogical innovations on those platforms is unknown.

For example, in current CS MOOCs, video is the most commonly used format for delivering content knowledge. Online learners' satisfaction with video lectures can predict their perceived learning experience and their sense of engagement with content (Scagnoli et al. 2019). We observed various design formats and lengths of videos. One general guideline from a recent study suggests that the most effective length for a video lecture in MOOCs is 7–8 minutes (Guo et al. 2014). Chen and Wu's (2015) study compared the effects of three online video lecture types and found that the lecture capture and picture-in-picture types were more effective than the voice-over type in enhancing learning performance. While empirical efforts to study the effect and design of video lectures have increased, there are, in general, limited guidelines on how to design and use video lectures in ways that are effective and engaging for MOOC participants. In addition, given the limited interaction among learners in xMOOCs, incorporating in-video elements and taking advantage of video analytics could facilitate more content discussion and socially networked learning processes in MOOCs. In summary, despite the volume of knowledge we have accumulated on pedagogical innovations and design principles in online education, little research and effort has been invested in applying it to MOOCs. Of greater concern, as MOOCs become important activities of higher education institutions, is the risk that the way MOOCs are currently designed will be seen as a prototypical model of online education. We hope that our findings may help researchers and practitioners who develop MOOCs to apply and follow more empirically proven design principles and guidelines.

Third, the findings indicate that the intermediate-level courses applied the guidelines for teaching job tasks and helping learners build problem-solving skills more than the introductory-level courses did. Although the overall application of the design principles is limited, this particular result is promising: more design effort can be directed toward teaching job tasks and building problem-solving skills at other levels of CS MOOCs within the technological affordances of both platforms. We observed that the reviewed CS MOOCs were often part of specializations. Including more diverse job-task and problem-solving opportunities beyond video or text-based demonstrations in the introductory-level courses, thus fostering more engagement and career relevance, might attract and motivate learners to continue their learning in the course and perhaps in the more advanced courses in the specialization.

Despite their rapid growth, MOOCs are still new to many people, and a variety of perspectives exist on how MOOCs should be defined and treated. Even as we learn about MOOCs in terms of their characteristics, boundaries, and roles, MOOCs themselves are still evolving. To some people, a MOOC is an interactive online course conducive to professional and career development; to others, it is an open online repository of self-paced video resources (Spector 2014). We think it is important to understand the ways in which stakeholders understand and define MOOCs, because their perceptions can affect how MOOCs are designed and offered. In other words, the design differences we observed might be due to differing perceptions among the stakeholders who hold design decision power over the MOOCs. Further studies could focus on the perspectives of instructors and instructional designers on the instructional quality and design of MOOCs and, by doing so, shed light on the future direction of pedagogical innovations in MOOCs.

To conclude our discussion of the findings, we note a few limitations. First, we purposefully reviewed 40 computer science MOOCs from the two largest providers; thus, our findings may not be representative of MOOCs from all current MOOC providers. Second, although the instrument used to review the MOOCs is based on the prolonged scholarly effort and strong theoretical foundation developed by Clark and Mayer (2011), these principles were intended as guidelines for the development of all forms of e-learning materials and courses, not specifically for MOOCs. Additional studies that evaluate the quality of MOOCs using the e-learning principles would help to further refine and validate the instrument, including by expanding the principles represented by single items (e.g., the job validity and redundancy principles). Finally, we cannot generalize our findings from this review of CS MOOCs to MOOCs in other fields (e.g., social science), other learning domains (e.g., MOOCs targeting attitudinal learning; Watson et al. 2017), or other types of CS MOOCs (e.g., cMOOCs).

Conclusion

This study presented an initial understanding of the application of Clark and Mayer's (2011) e-learning principles by exploring the pedagogical design of CS MOOCs. The study also carried out systematic comparisons of course quality across platforms and levels of course difficulty. Pedagogical design is one of the major factors in course quality; course quality affects the perceived usefulness of a course, and that perception influences learners' continued engagement and participation in MOOCs (Yang et al. 2017). Although principles regarding the organization and presentation of course content were well reflected in the reviewed MOOCs, many of the other principles (e.g., helping learners experience diverse job tasks and practice domain-specific problem-solving skills) were applied to only a limited degree in the MOOCs of both platforms, regardless of course level.

This study extends the current literature on the instructional quality of MOOCs by examining the application of e-learning principles to MOOCs. By revealing areas for improvement, the study also points to potential directions for pedagogical innovation in MOOCs. Future studies could review additional MOOCs from different subjects, providers, and difficulty levels using the e-learning principles and compare similarities and differences in the results. MOOCs with distinguished designs could be highlighted to inform the design and improvement of other MOOCs. A further study validating the principles as an evaluation instrument for MOOCs would help guide the field. Lastly, it would be valuable to study the impact of the instructional quality of MOOCs by examining the relationships between the application of e-learning principles and learning outcomes.