Free online courses, including Massively Open Online Courses (MOOCs), are an increasingly popular form of online instruction for large audiences. They typically offer access to live, video-recorded, or text-based instruction with no associated fee, and often cover high-demand topics such as introductory sciences or computer programming. Because the cost of offering a course to a diverse audience scales only with the cost of internet bandwidth, institutions offering free online courses very seldom limit the number of online consumers of the course material. As a result, there are few barriers to entry for those interested in attending, especially compared to the high cost of official enrollment in a college-level course at the prestigious institutions that often host these courses. This openness has been argued to provide an excellent opportunity for increasing the inclusiveness of education in many content areas (MacLeod et al. 2015).

One often-noted phenomenon, however, is the extremely low rate of persistence of users within these courses (Breslow et al. 2013; Gütl et al. 2014). Persistence is influenced by many factors. Among these, MOOC researchers have tended to focus on demographics, courseware interactions, and reported intent for enrollment (Ho et al. 2014; Nesterko et al. 2014a, b; Reich 2014). Individuals' role relationships to the content, e.g. as teachers versus as students, could also influence persistence; teaching status has been examined as a predictor of MOOC performance (DeBoer et al. 2013), though not of course persistence. In this study, we examine persistence in an online STEM course using two motivational frameworks drawn from the educational research literature, Achievement Goal Theory and Expectancy-Value Theory, which connect motivation and persistence in learning spaces. We further explore an emergent heterogeneity in motivation–persistence relationships between two different kinds of learners who make use of open courses: students (the typically conceptualized learner) and teachers (who use open courses as professional development for traditional instruction).

Background

Achievement goal theory

Achievement Goal Theory (AGT) originated in the 1980s with the primary distinction between learners with mastery goals and performance goals. Mastery goals are those in which the goal is “to develop one’s competence”. Performance goals, on the other hand, revolve around “demonstrating one’s competence by outperforming peers” (Senko et al. 2011, p. 26). This distinction was further refined through empirical work to add a second dimension of approach versus avoidance (Elliot 1999). Approach goals indicate a positive desire for increased competence (Mastery Approach, or MAP) or outperformance of others (Performance Approach, or PAP). Avoidance goals take the form of a double-negation: Performance Avoidance, or PAV, is the desire to avoid the appearance of incompetence. Mastery Avoidance is the theoretical desire to avoid failure-to-learn or loss of previously held skills, although there remains some debate about its proper characterization (Hulleman and Senko 2010).

Overall, learners who report high levels of Mastery Approach goals tend to display many characteristics and behaviors that are beneficial to learning, including increased persistence when facing difficulty (Elliot et al. 1999). Thus, although not previously connected to open course persistence, the connection of achievement goals to learner persistence more generally is well established. Learners who report high levels of Performance Avoidance goals tend toward unproductive behaviors, such as not seeking help when needed; this would presumably lead to lower persistence when faced with a frustrating situation. Empirically, the relationship between Mastery Approach goals and high persistence is well documented; Performance Approach generally contributes to persistence (though not always to learning outcomes); and the relationship of Performance Avoidance or Mastery Avoidance to persistence is less straightforward (Elliot et al. 1999; Hulleman and Senko 2010).

The latest research on Achievement Goal Theory has attempted to reconcile numerous subtle distinctions. For instance, if a learner wishes to appear competent, there are questions about whether it is the learner’s ultimate reason behind wanting to appear competent that matters, or whether it is sufficient that the learner wishes to appear competent (Elliot 1999; Senko 2016). There is some evidence to support both perspectives, and the best path forward may include additional distinctions within the theory. However, doing so comes with drawbacks to the theory’s parsimony. Such research is in its early stages, and its full theoretical implications remain unexplored (Senko 2016).

As our study’s objectives are to investigate the effects of participant motivation on persistence in MOOCs, we employ the slightly older but better understood three-goal model (mastery approach, performance approach, and performance avoidance), acknowledging that it comes with known limitations. In particular, the omission of mastery avoidance from the model mitigates test fatigue and confusion for younger learners (mastery avoidance items typically employ double-negation wording), but discounts adult learners’ wishes to avoid losing skills they formerly possessed. Notably, our instrument also measured appearance performance goals (e.g. “It is important to me that I look smart compared to other people studying programming”) and not normative performance goals (e.g. wanting to outperform peers), which are known to have somewhat different predictive effects (Elliot and McGregor 2001; Hulleman and Senko 2010).

Expectancy value theory

A second major theory connecting motivation to actions like persistence is Expectancy Value Theory (EVT), so named for the major constructs it proposes as the key variables in a decision-making process resembling a rational utility calculation (Ajzen and Fishbein 1977; Fishbein and Ajzen 1975; Lovett and Anderson 1996; Wigfield and Eccles 2000). Expectancy is a learner's own expectation of success at a given endeavor. Value is the worth the learner places on a successful outcome. Cost is the perceived "cost" against which the merits of an action are weighed. If, intuitively speaking, Expectancy times Value exceeds Cost, the learner will continue with a course of action (Wigfield and Cambria 2010), for example, persisting on a task or in a course.
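
Expressed schematically (our paraphrase of the theory's intuition, not a formula taken from the cited sources), the decision rule can be written as:

```latex
% Informal EVT decision rule: persist when the expected payoff outweighs the perceived cost
\text{persist} \iff E \times V > C,
\qquad E = \text{expectancy of success},\;
       V = \text{subjective value},\;
       C = \text{perceived cost}
```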

As a construct, Expectancy is generally composed of an individual’s beliefs about his or her own level of ability and expectations of success. Value includes several component constructs such as how important the individual feels it is to do well on the task (attainment value); how much the individual enjoys doing the activity (intrinsic value); and how useful the individual thinks the consequences of the action, e.g. skills gained, will be (utility value). Interest in a topic is closely related to intrinsic value because accomplishing something that one is interested in is likely to provide an enjoyable experience (Deci and Ryan 1985; Wigfield and Cambria 2010). Notions of self-identity relate to attainment value by being part of the calculus for deciding how important it is to an individual to do well on a task; and expectancy, by contributing to estimates of personal ability (Eccles 2009). An individual’s expectancies and values regarding a particular task (e.g. completing a course activity) are determined by a combination of social influences and feedback from outcomes of previous actions (Wigfield and Cambria 2010). While long acknowledged as present, Cost has only recently become an area of serious investigation by researchers, and as such, is not included in our measures for this study (Flake et al. 2015).

Predictions about persistence in online courses

MOOC users drop out of online courses for a variety of reasons, including life changes, diminished career relevance, excessive academic load, insufficient academic preparation, lack of social or technical support, and violated expectations about the course format (Gütl et al. 2014). Some of these reasons—particularly academic and social support—overlap substantially with the types of issues that achievement motivation studies have addressed. For instance, persistence in the face of frustrating situations is correlated with learners possessing certain types of achievement goals. Expectancy Value Theory suggests cognitive constructs—expectations of success, practical valuation of the outcome, and others—that weigh against the “cost” of frustration to determine willingness to participate. These motivational factors have been shown to predict persistence in classroom contexts (Ainley et al. 2002; Kuh et al. 2008). However, open courses consist only of volunteers, and thus variation in achievement goals may not be the central predictor for learner persistence here.

In addition, free online course environments differ somewhat from traditional classrooms in that they are commonly characterized by external role heterogeneity among learners. Past research has emphasized national, socio-economic, age, and prior-knowledge heterogeneity (Breslow et al. 2013; DeBoer et al. 2013). Here we draw attention to another heterogeneity of interest: K-12 students and teachers may both be enrolled in a given course as learners. Indeed, a recent study of MOOC participants (Ho et al. 2015) found that teachers can make up a significant proportion of MOOCs originally intended for traditional students, with many actively teaching in the same topic area as the course. In many STEM areas, a significant number of teachers suffer from insufficient background knowledge in the topics they teach (Darling-Hammond et al. 2005). Moreover, teachers and non-teachers perform at similar levels in MOOCs (DeBoer et al. 2013). Additionally, MOOC course-taking for professional development is particularly common among individuals under financial or geographic constraints (Dillahunt et al. 2014), so the appearance of teachers as learners in MOOCs is perhaps not so surprising.

Nevertheless, the different external roles of teachers and non-teachers may dictate differences in the two groups’ relationship to the course content. Teachers, for instance, might feel compelled to learn the material more thoroughly so as not to appear unknowledgeable in front of their students (i.e., a form of Performance Avoidance orientation). Such a difference might not manifest as a difference in course performance or mean levels of specific motivational constructs, but rather as a differential factor in predicting persistence: if most teachers are subject to this pressure, then teachers with more such motivation may persist at a higher rate; meanwhile, students—not as subject to the same pressure—may show a null or negative relationship between Performance Avoidance motivation and persistence. Thus even under the same theoretical frameworks, it is possible that the linkages between motivational constructs and observed persistence may be substantially different.

Thus, the main research questions to be answered in our study are: (1) whether Achievement Goal Theory and Expectancy Value Theory constructs predict learner persistence in a free online course, and (2) whether the set of predictive constructs is the same or different between K-12 students and teachers enrolled as learners in the same course.

Methods

Course context

We conducted a correlational study of user behavior in a free online summer course in virtual robotics (i.e., learning to program simulated robots within a virtual 3D environment). The use of virtual rather than physical robots reduces cost, simplifies setup, and speeds up the learning process (Liu et al. 2013); these factors are likely to be important in improving diverse access to open online courses. In the US, robotics appears in technology education classes, after-school programs and clubs, homeschool weekend workshops, and summer informal learning programs. The beginner-level programming topics (e.g. movement commands, loops) covered in the course were considered common content that both students and teachers would need to know, so only a single, general-audience course section was made available, open to everyone. Students might take the course out of interest in programming or robotics, and teachers might take it if they planned to teach programming or robotics in or after school in the coming years.

The course comprised four instructional units targeting beginner-level programming concepts (e.g., sequences of commands, loops, if and if-else statements), plus a capstone challenge. See Fig. 1 for an example instructional task. Following a common style of online instruction, a typical instructional sequence consisted of a brief "follow-along" video demonstrating a robotics or programming concept, followed by multiple-choice review questions and optional exploration tasks; after a few such sequences came a programming challenge task based on the new concepts.

Fig. 1

Screenshot of a typical lesson in the curriculum. Learners watch the video (left, top) and replicate the steps in the programming environment (right). The simulated robot runs the program in a 3D world that mimics a physical setting (center). The lesson page (left, bottom) continues with follow-up questions and additional activities

All instructional materials (instructions for programming challenges, brief open-response and multiple-choice questions, and interim exams) and research instruments were posted online within a Moodle course. A learner might be expected to complete the full course in roughly 30 h of work. We studied usage over a 10-week period during the summer, during which an instructor held live lecture and Q&A sessions at regular intervals. At the end of each instructional unit, learners were required to pass a multiple-choice examination on the unit content (achieving a score of 70% or higher) before being given access to the next instructional unit. Users were allowed unlimited attempts on these examinations without penalty. The university-based organization that ran the course offered separate "certifications" for students and teachers upon completing the course and achieving a passing score on the final exam.
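
As a minimal sketch of the unit-gating rule described above (the function, variable, and threshold names are ours, not the Moodle configuration actually used), the unlocking logic amounts to:

```python
PASS_THRESHOLD = 0.70  # 70% or higher on a unit exam unlocks the next unit

def highest_unlocked_unit(exam_attempts_by_unit):
    """Return the number of consecutive units passed, i.e. how many later units are unlocked.

    exam_attempts_by_unit: list of lists, one inner list per unit, holding every
    attempt score (0.0-1.0). Attempts are unlimited and unpenalized; a unit counts
    as passed if any attempt meets the threshold.
    """
    unlocked = 0  # unit 1 is available once the pre-course activity is complete
    for attempts in exam_attempts_by_unit:
        if attempts and max(attempts) >= PASS_THRESHOLD:
            unlocked += 1  # passing this unit's exam unlocks the next unit
        else:
            break
    return unlocked

# Example: passed unit 1 on the second attempt, has not yet passed unit 2
print(highest_unlocked_unit([[0.60, 0.80], [0.50]]))  # -> 1
```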

Participants

Recruitment into the course was done by the organization that created and offered the course. Methods included online advertising through the organization’s websites and e-mail lists, and in-person advertising through conferences and robotics competitions. Participants from a number of different countries signed up for the course, but they were primarily from diverse regions throughout the US; this kind of heterogeneity of learners is common in open online courses. Given our research questions, we selected only K-12 teachers and K-12 students who answered all the pre-survey items (i.e., excluding the small number of adult learners who were not teachers), resulting in a sample of 172 K-12 students and 114 K-12 teachers.

Instruments

In order to gain access to the course materials, each user was required to complete a pre-course activity that contained the key instruments for this research. Participants were not shown the results of the surveys or test, and were allowed to continue as long as they submitted a response, even if it was left blank. The pre-test contained all of the instruments listed in this section. After each module, participants were asked to complete the brief attitudinal surveys again.

Demographics

We asked participants to self-identify as one or more of: primary school student, middle school student, high school student, college student, informal educator, formal educator of a non-robotics subject, formal educator of robotics or computer science, homeschool teacher, and/or supervising adult. We later collapsed respondents into simple teacher and student groups, and at that time discarded responses from respondents who checked both teacher and student roles; non-participating adults (e.g. parents who were merely supervising); and college students, who may have fit into either category, but numbered too few to form a third category.

Computer science principles content items

We constructed a set of 14 researcher-designed questions aligned to a programming-focused subset of the recently developed Computer Science Principles framework (The College Board 2016), the most relevant standards for this topic. To serve as a sensible pretest of relevant knowledge that could come from many different prior programming experiences, these items tested understanding of programming-related skills in non-programming and pseudo-programming contexts (i.e., not knowledge of programming-language-specific conventions). Students most commonly began with a high D (just below 70%) and teachers with a high C (just below 80%). Psychometric analyses verified that all items were positively associated with overall performance and that no items suffered from ceiling effects.
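
As an illustration of the psychometric checks mentioned above, corrected item-total correlations and ceiling checks can be computed roughly as follows; this is our sketch with hypothetical column names, not the authors' analysis code:

```python
import pandas as pd

# items: DataFrame of 0/1 item scores, one column per pre-test question
# (hypothetical column names such as q1..q14; not the authors' actual data layout)
def item_analysis(items: pd.DataFrame) -> pd.DataFrame:
    total = items.sum(axis=1)
    rows = []
    for col in items.columns:
        rest = total - items[col]                   # corrected total: exclude the item itself
        rows.append({
            "item": col,
            "difficulty": items[col].mean(),        # proportion correct; values near 1.0 suggest a ceiling
            "item_total_r": items[col].corr(rest),  # should be positive for a well-behaved item
        })
    return pd.DataFrame(rows)
```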

The example item in Fig. 2 is from a problem sequence involving an automated traffic signal that controls traffic getting on and off a single-lane, single-direction bridge. Vehicle weight is measurable only at the on-ramp; the off-ramp sensor detects when a vehicle exits but does not directly measure its weight.

Fig. 2

Example test item aligned to CS Principles (4.1.1 Develop an Algorithm)

Motivation survey

We asked 21 six-point Likert-scale questions on motivational constructs from both Achievement Goal Theory and Expectancy Value Theory. Items from existing validated surveys (e.g., Elliot and Murayama 2008; Wiebe et al. 2003; Sha et al. 2016) were used directly or adapted slightly to shift the topical focus to computer science; the scales that were adapted were designed to be easily adapted across domains (e.g., "It is important to me that other students think I am good at [insert domain name]") and have been validated many times across domains and age groups (e.g., Elliot et al. 1999; Elliot and Murayama 2008; Pekrun et al. 2009). Mastery Avoidance items were omitted from our instruments because of concerns about linguistic complexity for younger students (e.g., "One of my goals is to avoid learning less than I could about programming" could be problematic for elementary-school-age students), to address concerns about test fatigue, and because prior research connecting this construct to learner persistence is weaker.

See Table 1 for the list of constructs and the mean (and standard deviation) of each scale, separately for the teacher and student subgroups. Teacher and student means were generally similar, except that teachers had slightly higher expectancy of their own success (M = 5.1 vs. 4.9) and lower Performance Avoidance motivation (M = 2.8 vs. 3.3); see the t-test results in Table 1. Standard deviations were similar across students and teachers for each construct, ensuring that differential predictiveness did not result from restricted-range problems within one group.

Table 1 Measures (number of items), sample items, internal reliability, pre-test mean values (with standard deviations), and t-test significance for group differences in pre-test means

Most scales had acceptable reliabilities, with Cronbach's α above .75 (see Table 1). Intrinsic Interest formed a two-item scale with Cronbach's α = .77 for students and .68 for teachers. However, the second Intrinsic Interest in Programming item ("I would like to know more about programming") exhibited signs of a ceiling effect (M = 5.4 on a 6-point scale), so we excluded it from model-building, retaining it only for descriptive purposes. Intrinsic Interest is therefore operationalized going forward by the single item "I love programming", except where noted.
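
For reference, Cronbach's α for any of these scales can be computed from the raw item responses as follows; this is a generic sketch, not the instrument-specific code used in the study:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = the items of one scale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of the scale total
    return (k / (k - 1)) * (1 - item_variances / total_variance)
```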

Analysis

Initial motivation

We began by comparing mean motivation levels between students and teachers, both to examine whether the two groups differed (and thus might have differential predictors of persistence because of greatly differing initial motivational states) and to generally characterize the motivations of this population.

Persistence

This robotics course, like other free online courses, experienced significant user attrition with each successive unit. Of the n = 286 users completing the pre-course activity, only n = 93 (32%) reached the end of the second unit, n = 43 (15%) reached the end of the third unit, and n = 10 (3%) completed the last content unit by the end of the summer.

To have sufficient numbers of completers for statistical analysis, we defined Persistence in the course as a binary variable indicating whether a user had completed at least half (i.e., two or more) of the four curriculum units (93 of the 286 users), as evidenced by a completed attempt on each unit's exam. Since a passing score was required on each exam before the next section was made available, users were unable to access later units if they had not successfully completed the prior ones.
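
In code, the binary persistence indicator might be derived roughly as follows (a sketch with hypothetical column names; the actual Moodle gradebook export will differ):

```python
import pandas as pd

# users: one row per learner; unit1_exam .. unit4_exam hold best exam scores
# (hypothetical column names; NaN = the unit exam was never attempted)
def add_persistence(users: pd.DataFrame) -> pd.DataFrame:
    exam_cols = ["unit1_exam", "unit2_exam", "unit3_exam", "unit4_exam"]
    units_with_completed_attempt = users[exam_cols].notna().sum(axis=1)
    # Persistence = a completed exam attempt for at least half (two) of the four units
    users["persist"] = (units_with_completed_attempt >= 2).astype(int)
    return users
```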

We performed separate stepwise logistic regressions to find which Achievement Goal factors and which Expectancy Value factors, along with pre-test scores, best explained persistence among students and among teachers. Factors were added sequentially in a stepwise fashion if they significantly increased predictive power according to a likelihood-ratio test. Exploratory analyses revealed that the rate of persistence was generally a linear function of each predictor measure, except Utility, which had a significant quadratic element. However, including a quadratic Utility predictor did not change the final best-fitting model. Therefore, the presented models involve only linear predictors.
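
A minimal sketch of forward stepwise selection by likelihood-ratio test is shown below; it assumes a pandas DataFrame with the binary persist outcome and the predictor columns (the column names in the usage comment are hypothetical), and it mirrors the procedure described above rather than reproducing the authors' original script:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

def forward_stepwise_logit(df: pd.DataFrame, outcome: str, candidates: list, alpha: float = 0.05):
    """Forward stepwise logistic regression selecting predictors by likelihood-ratio test.

    At each step, the candidate whose addition gives the largest significant
    improvement in log-likelihood over the current model is added; the procedure
    stops when no remaining candidate improves the model significantly.
    """
    y = df[outcome]
    selected = []
    current_llf = sm.Logit(y, np.ones((len(y), 1))).fit(disp=0).llf  # intercept-only model
    while True:
        best = None
        for var in candidates:
            if var in selected:
                continue
            X = sm.add_constant(df[selected + [var]])
            llf = sm.Logit(y, X).fit(disp=0).llf
            lr_stat = 2 * (llf - current_llf)          # likelihood-ratio statistic, df = 1
            p_value = stats.chi2.sf(lr_stat, df=1)
            if p_value < alpha and (best is None or llf > best[1]):
                best = (var, llf)
        if best is None:
            break
        selected.append(best[0])
        current_llf = best[1]
    final = sm.Logit(y, sm.add_constant(df[selected])).fit(disp=0)
    return selected, np.exp(final.params)              # Exp(beta), i.e. odds ratios

# e.g. (hypothetical column names):
# selected, odds_ratios = forward_stepwise_logit(students, "persist",
#                                                ["MAP", "PAP", "PAV", "pretest"])
```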

Motivational change

Finally, we examined means and correlations in motivation between the levels reported on the pre-test and those reported at the end of the second unit exam (i.e. matching the cutoff used to operationalize persistence). We refer to values reported on this exam as the “post-test” values relative to our operationalized period for persisting in the course. We examined change across groups between pre and post to assess whether instability in motivation levels accounts for differential predictiveness of the motivational factors across learner groups.

Results and discussion

Who joins an online programming course?

Student learners (n = 172) in this course were primarily of middle school (53%) and high school (45%) age. Of the n = 114 teachers, 27% were informal educators, 32% were formal educators in non-robotics subjects, 46% formally taught robotics or CS, and 11% were homeschool teachers. Totals sum to more than 100% because educators often teach more than one discipline, or in multiple settings.

As shown in Table 1, users in both the teacher and student groups had relatively high prior intrinsic motivation to participate in the activity. Both teachers and students have a say in the selection of activities in which to participate, especially during the summer. Interest in learning the material, therefore, likely serves as a first-order filter, explaining the overall high value of “I would like to learn more about programming” (the excluded at-ceiling item, with a mean of 5.4 on a 6-point scale). Mean scores for the broader statement “I love programming”, while still in the “Agree” range, were lower (mean of 4.7 on a 6-point scale). The sample group also had high levels of Mastery Approach motivation (mean >5 on a 6-point scale), suggesting that the types of users who are attracted to a MOOC have an existing interest not just in the material, but in mastering it for its own sake. Performance Approach and Performance Avoidance motivation were moderate but more variable among individuals.

Who persists in the online course?

The first stepwise logistic regression analysis of persistence tested the three Achievement Goal Theory factors plus pre-test score (see the upper left of Table 2). The procedure resulted in models with different significant predictors for students and teachers. The resulting student model included Mastery Approach motivation and pre-test score. The values shown in the table are odds ratios, Exp(β). For example, a value of 2.17 means that a student's odds of persisting increase by a factor of 2.17 for every one-point increase on the 1-to-6 Mastery Approach scale. The effect size for pre-test score is smaller (the equivalent of an 11% increase in the odds of persisting for every 10% increase in pre-test score). Other factors did not add significantly above these two.

Table 2 Exp(β) (odds ratio) for factors in the best-fitting and forced logistic regression models of pre-course measures predicting learner persistence, separately for the student and teacher groups

In contrast to the best student model, the best-fitting teacher model included Performance Approach motivation as a positive factor and Performance Avoidance motivation as a negative factor (i.e., a completely different set of two predictors). These effects were quite large: every one-point increase in Performance Approach multiplied the odds of persisting by almost 2.5, and every one-point decrease in Performance Avoidance multiplied them by almost 2 [note that odds ratios below 1 reflect decreasing odds; an odds ratio of 0.53 corresponds to a 1.9-fold decrease (1/0.53)].
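
The reciprocal conversion used here follows directly from how Exp(β) acts on the odds; stated generally:

```latex
% A one-point increase in predictor x multiplies the fitted odds of persisting by e^{\beta}:
\frac{\operatorname{odds}(\text{persist}\mid x+1)}{\operatorname{odds}(\text{persist}\mid x)}
  = e^{\beta} = \operatorname{Exp}(\beta),
\qquad
e^{\beta} < 1 \;\Rightarrow\; \text{a decrease by a factor of } 1/e^{\beta}
\;\;(\text{e.g. } 1/0.53 \approx 1.9).
```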

A second set of stepwise logistic regressions with the Expectancy Value factors plus pre-test score once again produced different student and teacher models (see the lower left of Table 2). The resulting student model included Intrinsic Interest and pre-test score. Perhaps because it relied on only a single item, Intrinsic Interest was not quite as predictive as Mastery Approach had been, but it remained an important predictor. The other factors did not add significantly above those included. The resulting teacher model included Identity and Intrinsic Interest; the effect sizes of these predictors were large.

To confirm that the student and teacher models were in fact different, rather than nearly as good for both groups, we ran both sets of logistic regressions again, forcing the opposite group's model onto each data set. In the Achievement Goal analysis (see the upper right of Table 2), Mastery Approach motivation explained some of the variance in teacher persistence at a marginal level of significance, but pre-test score was not significant. Neither Performance Approach nor Performance Avoidance was significant for students. In other words, the best teacher model did not fit the students at all, and the best student model only half fit the teachers.

In the Expectancy Value analysis (see the lower right of Table 2), Intrinsic Interest was already present in both models and remained significant. However, pre-test score was not significant for teachers, and Identity was not significant for students. In other words, the Expectancy Value Theory model obtained for each group predicted persistence only for that specific group.

Figure 3 presents the relationship of each predictor to persistence separately for teachers and students, to examine whether there are similar trends that are perhaps only statistically significant in one group. For each predictor variable, we divided the variable's range into bins and grouped responses within these bins to compute a mean level of persistence for each bin. The y-axis shows the mean persistence for learners within a given predictor bin, and the x-axis shows the range included in each bin. The number of bins was adjusted to reflect the distribution of pre-test levels so that no bin had very few responses, but similar results are obtained with other bin ranges. A best-fitting linear regression line (dotted) for each group is added to each graph. Only for the interest item are the trends in both groups exactly the same (as the logistic regressions also found), and only the Mastery Approach construct shows the same directional pattern with different degrees of influence across the two groups (as the logistic regressions also found). For the other four motivational constructs, persistence was related to the motivational factor for only one group. For example, persistence increases with pre-test score among students, but the trend is actually slightly negative among teachers.

Fig. 3

Separately for teachers (gray squares) and students (black circles), mean persistence (and SE bars) for binned ranges on the six pre-test measures that were significant predictors of persistence for at least one group
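
The binned persistence rates plotted in Fig. 3 can be reproduced with a simple group-by; this is an illustrative sketch, and the bin edges and column names are ours rather than the exact ones used for the figure:

```python
import pandas as pd

def binned_persistence(df: pd.DataFrame, predictor: str, bins) -> pd.DataFrame:
    """Mean persistence (with standard error) within ranges of one predictor."""
    binned = pd.cut(df[predictor], bins=bins)
    return (df.groupby(binned)["persist"]
              .agg(mean_persistence="mean", se="sem", n="count")
              .reset_index())

# e.g. four bins across the 1-6 Mastery Approach scale (illustrative edges)
# binned_persistence(students, "MAP", bins=[1, 3, 4, 5, 6])
```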

Follow-up analyses of motivational change

It is possible that learner goals were changing dynamically during the course in uneven ways across groups. For example, perhaps Performance Approach goals were not a significant predictor for students because many students changed the extent to which they held these goals during the learning (i.e., the pre-test did not measure stable characteristics). To examine this possibility, we conducted two complementary follow-up analyses on the teachers (n = 33) and students (n = 60) who had completed the surveys at both pre and post: one analysis examined whether there were differential pre-post mean shifts in construct values for teachers versus students, and a second tested for differential relative-order stability (or test-retest reliability) within each group.

A two-way mixed ANOVA on teachers and students tested for overall pre-post changes in mean predictor values (see Table 3 for pre and post means). Most predictors in the model showed no significant shift, except for intrinsic interest, which increased for both groups (p = .055 for teachers, p < .001 for students). However, there were no statistically significant interactions for the constructs included in the best-fitting regression models, indicating that teacher and student values shifted in similar ways for all constructs in the model. Expectancy, which was not part of the model, did show a marginally significant (p = .057) interaction between time and group, in addition to an overall pre-post shift (p = .03).

Table 3 Pre and post mean scores and Pearson test-retest correlation coefficients for motivation construct values before versus after two course modules

An examination of the pre-post correlations revealed that most motivational constructs were very stable (r ≈ .7; see Table 3), indicating that individuals with low pre-scores on a given measure tended to also have low post-scores, and those with higher pre-scores tended to have higher post-scores. Most importantly, the lowest stabilities were still fairly high (above .4), and predictor stability did not correspond with predictor significance; in particular, lower stability did not prevent predictors from being included in the model: the two measures with the lowest pre-post correlations for teachers, Performance Avoidance motivation (r = .54) and Identity as Programmer (r = .44), were both significant in the teacher model. This rules out predictor instability as an alternative explanation for which factors were significant predictors.
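
The two follow-up analyses can be sketched as follows, assuming long-format survey data with hypothetical columns id, group ('teacher'/'student'), time ('pre'/'post'), and one column per construct, and assuming the pingouin package is available for the mixed ANOVA; this outlines the analysis logic rather than reproducing the authors' code:

```python
import pandas as pd
import pingouin as pg  # assumed available for the mixed ANOVA

def stability_checks(long: pd.DataFrame, construct: str):
    # Test-retest (pre vs. post) Pearson correlation within each group
    wide = long.pivot_table(index=["id", "group"], columns="time",
                            values=construct).reset_index()
    retest_r = wide.groupby("group").apply(lambda g: g["pre"].corr(g["post"]))

    # Group x time mixed ANOVA: does the pre-post shift differ between teachers and students?
    anova = pg.mixed_anova(data=long, dv=construct, within="time",
                           subject="id", between="group")
    return retest_r, anova
```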

Finally, a brief analysis explored the alternative hypothesis that our observed results were the product of advanced students "skipping ahead" and taking the quiz without completing the chapter activities. In other words, did students with high pre-test scores appear to persist without actually completing all the activities? To test this possibility, we constructed a dummy variable, "skipper", indicating participants who completed a chapter exam without also completing the final programming challenge in that unit. A forced logistic regression found pre-test score to be a non-significant predictor (p = .76) of "skipper" status. When skipping occurs, it is not systematically due to high prior knowledge. Thus, pre-test score predicts actual completion of activities (i.e., true persistence).
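
The "skipper" check amounts to one additional logistic regression; a sketch with hypothetical column names (skipper, pretest) follows:

```python
import statsmodels.api as sm

def skipper_check(df):
    """Regress 'skipper' status (1 = took a unit exam without finishing that unit's
    final programming challenge) on pre-test score; inspect the p-value on pretest."""
    X = sm.add_constant(df[["pretest"]])
    return sm.Logit(df["skipper"], X).fit(disp=0).summary()
```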

General discussion

Who persists in an online programming course?

Recall that motivational constructs from both Achievement Goal Theory and Expectancy Value Theory were expected to predict open online course persistence. Here we present a summary of the findings.

Common factors

Intrinsic interest in the subject matter itself (the single retained interest item, "I love programming") predicted persistence for both teachers and students. This is consistent with the EVT view of Intrinsic Interest as a core contributor to Value, which should increase motivation to persist in learning activities. This factor may be especially relevant in a summer learning environment, where a greater variety of competing options, like playing outside or going on vacation, are available, compared to the classroom environment, which offers few alternative choices.

Differential factors

There were also different factors, however, predicting student and teacher persistence: (1) Pre-test score predicts persistence for students but not teachers; (2) Performance Approach and Performance Avoidance motivation explain teacher persistence, while student persistence is better modeled using Mastery Approach; and (3) Identity is a significant predictor for persistence in teachers, but not students. Existing motivational theories (and common intuition) would suggest that, since teachers and students were both operating as learners in the course, students and teachers would exhibit the same effects of motivations on behaviors, yet they do not.

That teachers have different needs from students is consistent with what we know about teachers' support needs for classroom practice (Spillane and Zeuli 1999; Mishra and Koehler 2006). But here we saw such differences emerge in a new location: in persistence within the training itself. It is likely that different amounts and combinations of external pressures are present for students and teachers, i.e. the differences in motivational effects are attributable to environmental factors. For example, the demand for teachers to appear knowledgeable about subject material they will later teach might drive them to persist through the learning activities, whereas for students the learning itself may be the primary goal of such an experience (i.e., a difference in the importance of extrinsic versus intrinsic goals).

Alternatively, it could be that greater levels of maturity allow teachers to cope with frustration differently. That is, the differences are potentially due to differences in internal mechanisms for motivation and the way in which motivation regulates action. Modern theories of self-regulation in learning emphasize the combined importance of motivational and metacognitive factors in guiding learning (Azevedo 2005; Pintrich and DeGroot 1990; Zimmerman and Schunk 2001).

A third possibility, that achievement goals were changing dynamically during the course in ways that differ for students and teachers, was ruled out by a follow-up analysis: while there were small overall increases in intrinsic interest for both groups, there were no significant interactions between teacher/student group and time for any of the constructs in the model, and individual pre-post correlations were high.

Finally, there was no evidence to support the alternative explanation that our observed effects of prior knowledge for students were the product of advanced students “skipping ahead” and taking the quiz without completing the chapter activities. While there were some students who skipped ahead, as is often seen in online courses, this behavior was not associated with pre-test knowledge.

Implications for theory

The differentiation in models we observed has implications for theories of motivation in lifelong learning. The finding is not that students and teachers have different amounts (i.e., mean values) of motivation; it is that different types matter for each group. Current Achievement Goal Theory does not explain how two sub-populations of learners with the same overall levels of Mastery Approach, Performance Approach, and Performance Avoidance motivation can support two different models of persistence. Among students, Mastery Approach goals were a key differentiator between those who persisted in the course and those who did not. Performance Approach and Avoidance were not predictors for students, even when forced into the model. Teacher persistence, on the other hand, was most strongly predicted by exactly those factors: teachers' levels of Performance Approach and Performance Avoidance motivation were lower on average, but teacher-to-teacher differences in them more strongly predicted persistence. The effects of Mastery Approach were not entirely absent: when forced into the teacher model, Mastery Approach motivation was a marginally significant predictor.

In the past, the predictiveness of Performance Approach motivation has varied across combinations of learning content and contexts rather than within studies across populations learning the same content. However, in a meta-analysis conducted by Hulleman et al. (2010), students’ age (roughly indicated by their grade level in school) was not a moderating factor in the relationship between goal orientations and performance outcomes. That is, the ways in which Mastery Approach, Performance Approach, or Performance Avoidance goal orientations predicted student outcomes were the same even at different grade levels (on average across studies). Of course, it may be that there are qualitative differences once learners reach adulthood that have not been studied sufficiently. The current study suggests there are real differences in the predictive behavior of the different AGT factors in student versus teacher populations. AGT needs to be expanded to explain this type of moderation relationship, whether by external factors (e.g., reasons for task completion) or internal factors (e.g., different regulatory abilities), or a differential stability of achievement goals within student versus teacher groups. Further, it is possible that subgroup effects like the student–teacher differences seen here could explain some divergent findings in past research as well.

Expectancy Value Theory similarly must explain the observed moderation of persistence predictors across groups. At a higher level, EVT predicts that some kind of value is required for persistence, and both identity and intrinsic interest are kinds of value. But EVT does not currently have a strong account of why different Value terms are important predictors for students versus teachers: Intrinsic Interest is strong in both groups, but Identity is a significant predictor only for teachers. Perhaps teachers, as older individuals, have a more established sense of self-identity and use it more effectively to overcome frustration. If this is indeed happening, however, it means that Identity is not functioning as a component of the Value term; it is instead dampening the Cost function.

That pre-test scores per se predict student persistence is unexpected. Since participants were not shown their pre-test scores, and because expectancy scores were not predictive, the effect links actual prior knowledge to persistence rather than perceived prior knowledge to persistence. Actual knowledge is expected to directly affect learning (Durik et al. 2006), and presumably perceived and actual knowledge are correlated, so perceived knowledge can relate to persistence indirectly. Because the effect should be indirect (i.e., perceived knowledge affects persistence choices through Performance Approach/Avoidance and Expectancy), actual ability should not normally predict student choices when controlling for achievement goals and expectancy. It could be that pre-test scores are genuinely indicative of a generalizable proficiency in programming-related thinking (i.e., Computational Thinking skills) which enables students to proceed more quickly and with less effort, lowering the Cost function under EVT. Both AGT and EVT might interpret such an effect as indicating simply that more proficient students have less frustration to overcome. In either case, this accounting does not explain why the relationship holds for students but not teachers. As before, it suggests the presence of differences in environmental, internal, or goal-stability factors.

Implications for practice

Courses like the one we observed are both free of charge and completely open with regard to enrollment, and are thus likely to attract users from more diverse demographic categories than traditional classes. This is favorable from the broader computer science education goal of increasing inclusiveness in the field of Computer Science overall (Smith 2016). However, as both logical expectation and our results support, it also leads to a more diverse set of needs and, if those needs are not met, a greater rate of user attrition. In order to reduce the number of learners who turn away from the course before completing it, appropriate supports must be put in place that recognize, and perhaps even take advantage of, the non-uniformity of the learner population.

We have now identified a new, important demographic line (students versus teachers) along which learners divide with respect to the ways in which their goal structures affect their likelihood of remaining in the course. For students, greater prior knowledge was associated with persistence. To better support students in these courses, the pre-test measure could perhaps be used to provide additional scaffolds for students with weaker pre-test scores. For example, these students might be directed to additional, easier challenges. It is interesting, however, that the teachers did not appear to need such additional supports, even when they were at similarly low pre-test levels.

Teachers with a stronger sense of identity with the learning domain persisted at a higher rate. While our current findings are only correlational, it now seems reasonable that an intervention designed to increase teachers' sense of programmer identity could improve their rate of persistence (e.g., having teachers write a short essay describing how programming fits with their life goals), while doing the same with students is less likely to be useful. That some kind of discrepancy in learning occurs between students and teachers is not new; there are policy and environmental conditions affecting teachers that simply do not apply to students (e.g., credentialing obligations), and it has long been known that there are unique types of knowledge (e.g., Technological Pedagogical Content Knowledge; Mishra and Koehler 2006) which teachers must acquire in order to teach effectively. Professional development programs that support teachers' mastery of the underlying learning domains have had to take these factors into account in the past; since open courses will include teachers as part of their learner group, they too may need to start making teacher-focused accommodations. For example, since our study indicates that identity as a programmer is positively associated with persistence for teachers (and not harmful to students), an online programming course might deliberately begin calling its participants "programmers" and reinforcing learners' self-perceptions as such. The success of such an intervention could, in turn, provide evidence about whether this relationship is causal in nature.

There are almost certainly more such divisions of learner types within larger MOOCs that could shape the role of motivation in persistence and learning. However, we have also seen that learners who come to the course have one thing in common: an initial intrinsic interest in learning the material. A wisely-designed course might seek to harness this common element, immediately reaching attendees across demographic boundaries. For example, early materials could focus on especially exciting content and allow additional learner choice, two methods that generally improve domain interest and persistence (e.g., Hidi and Renninger 2006; Pelletier et al. 2001). Course designers in open, online learning environments must therefore take the time to understand what kinds of learners are present within their programs, then leverage both universal and group-specific supports to reduce learner attrition in each.

Limitations and future directions

The course we studied bore many resemblances to traditional MOOCs, including: (1) free enrollment and provision of curriculum; (2) opportunities to interact with course staff and other learners; (3) a mix of pre-recorded, live, and interactive materials; and (4) recognition for completion. However, the course size was modest by MOOC standards (n = 286), and lacked the resolution of instrumentation (e.g. the “clickstream”) that mainstream MOOC platforms provide. Appropriate caution should therefore be taken when generalizing from these results to MOOCs of larger scale. At the same time, there are many smaller scale open courses offered by many different organizations, and the results obtained here will be especially relevant to those kinds of courses.

MOOCs also differ with regard to internal policies around progress and completion. The course we studied used a strict form of progress enforcement that required learners to pass a quiz before proceeding to the next content unit; not all online courses do so. This "gating" mechanism might have been expected to increase attrition overall and strengthen the relationship between persistence behavior and learner knowledge. Yet, in practice, approximately 3% of initial enrollees completed the course, a level comparable to other MOOCs (Gütl et al. 2014). More importantly, initial learner knowledge predicted persistence for K-12 students but not teachers under these circumstances. If strict progress enforcement created this effect, it would confirm that these two learner groups responded differently to a single designable course feature. If the progress enforcement did not play a role, then it speaks again to the underlying differences between the two groups.

One potential limitation is that our study occurred in a specific, limited timeframe during the summer. Both students and teachers may have different goals during the summer than they do for the rest of the year, as students often engage in more interest-based activities (compared to normal classrooms), and teachers engage in professional development to prepare for the coming academic year. It is possible that the patterns of motivation we observed are stronger for such elective opportunities than in compulsory attendance environments, or for learning during holidays than for learning that occurs during semesters.

There were also a few theoretical limitations to our results. The omission of a Mastery Avoidance scale from the survey may have excluded an important predictor for adult learners, as Mastery Avoidance, sometimes characterized as the desire to avoid being unknowledgeable or to avoid losing existing skills, can be more influential in older adults (Ebner et al. 2006). However, as items seeking to measure Mastery Avoidance are lexically complex due to the multiple negations contained in the construct, including such items could simply have introduced a new confound for younger participants and those with lower reading comprehension skills. Additionally, since heterogeneity of motivational predictors was also present among the Expectancy-Value factors, we do not believe the inclusion of Mastery Avoidance would have changed the overall result showing factor × learner group interactions.

In an analysis of persistence patterns in HarvardX MOOCs, Reich (2014) concludes that persistence rates vary greatly depending upon whether an enrollee intended to complete the course at the outset. While this specific distinction was not present in our data, and we were therefore unable to study or control for it in our design, it remains the case that "intent to complete" (as well as the "reasons for enrolling" reported in Breslow et al. 2013) lacks the organized, empirically validated dimensions of construct differentiation upon which our study focused.

As a future research direction, there is great potential for follow-up investigations into the context surrounding the effects we have observed in each of the two learner groups. Do performance goals matter more to teachers because they are under certain kinds of administrative pressure? This also affords an opportunity to refine survey instruments like our own to account for learner-group-specific factors, such as whether teachers are taking the course as a primary form of instruction or as a way to refresh their existing knowledge of programming.

Finally, our study addressed only one type of learner heterogeneity: role differentiation between teachers and students. Many more distinctions could be made in the full online learner population, and we do not endorse teachers versus students as the only important distinction.

Conclusions

Our study examined the ability of motivational constructs from the educational motivation literature to predict user persistence in a MOOC-like online learning environment. We found that such factors were successful in predicting which users would persist, and which would leave, and that different patterns of motivation predicted persistence for different types of users. Persistence among K-12 students was predicted by prior knowledge, mastery approach orientation, and intrinsic interest. Persistence among teachers was predicted by performance approach orientation, performance avoidance orientation, intrinsic interest, and identity. This result underscores the direct relevance of issues of learner diversity to both motivational theory and MOOC design.