Introduction

A learning management system (LMS) is an information system that facilitates e-learning by supporting teaching and learning activities and the administration and communication associated with them. LMS adoption has been rapid, with the majority of higher education institutions now using LMS as an integral part of their course delivery (Browne et al. 2006). Whilst LMS offer administrative advantages to universities and may increase the pool of students available to them, the value of LMS for improving teaching and learning has been questioned (Sclater 2008). Despite the promise that LMS might be used to transform education by improving social and constructive learning (Rudestam and Schoenholtz-Read 2002), few instructors adopt them as more than communication and material distribution tools (Becker and Jokivirta 2007).

Furthermore, there has been little research on the impact of LMS on students (Coates et al. 2005; Wang et al. 2007). Students have reported that the primary benefits to them of LMS use are efficiency, particularly in access to materials (Lonn and Teasley 2009), and flexibility, particularly in terms of the location and timing of their participation in learning activities (Piccoli et al. 2001). Whilst some students also perceive that LMS use improves learning per se, educational researchers emphasise that it is not so much the LMS itself but the way it is used by the instructor that produces benefits for students (Dillenbourg et al. 2002; Reeves et al. 2005). In an EDUCAUSE Research Bulletin, Sclater (2008) suggested that poor student engagement with LMS may contribute to a lack of educational innovation, and a number of researchers have noted that instructors often limit their uses of LMS because of perceived additional demands on their time (Papastergiou 2006). Thus, the perceived value of an LMS to both students and instructors is likely to affect how students use it and how they benefit from that use.

In the information systems (IS) field, the term involvement is used to jointly describe two aspects of the perceived value of an IS that appear to be relevant in the LMS context. Involvement is “a subjective psychological state reflecting the importance and personal relevance of a system to the user” (Barki and Hartwick 1989, p. 53). Thus, involvement differs from the act of system use, but it seems reasonable to expect that differences in use and the benefits of use might be observed with differences in instructor and student perceptions of the importance and relevance of the LMS to them.

Researchers in the IS field include system use and the benefits of use among a number of measures of information system success (DeLone and McLean 1992). Indeed, following this approach, the success of an LMS can be measured in several ways which recognise that the success of the system might vary when viewed by different actors; for example, a system which is technically sound and meets all technical performance standards might still not enable students to study more efficiently or obtain other benefits from use. From comprehensive studies of how IS success has been measured by researchers, decision makers and users, DeLone and McLean (2003) identified six success measures, which in the LMS context are: the technical quality of the LMS as an information system (its system quality), the quality of the information that may be obtained from it (information quality), the quality of support and services that enable and assist users (service quality), the extent and nature of system use, user satisfaction with the system, and the benefits that are obtained from use. In this study, we adopt this set of success measures as a framework for understanding the role of involvement in “LMS success”.

LMS success

The success of LMS has been evaluated using measures at both the individual level and the organisational level (Alexander 2001). Because the emphasis of this study is on understanding the role of involvement in LMS success, as perceived by the student, we will adopt individual level measures of success. In defining success, we are guided by the IS success measures outlined by DeLone and McLean (2003). DeLone and McLean order the success measures in a sequential model (see Fig. 1) which proposes that system quality, information quality and service quality affect user satisfaction and system use which in turn influence the net benefits of system use.

Fig. 1 DeLone and McLean (2003) model of IS success

The DeLone and McLean (2003) model has been applied in many different domains (Petter et al. 2008), but has received little attention in the e-learning domain, although Wang, Wang and Shee (2007) developed an instrument to measure LMS success based on the model (but did not test it), and Holsapple and Lee-Post (2006) used the model as a framework for an action research study into e-learning system development. These studies, along with the wide range of dimensions of success that the model encompasses, suggest that it should provide useful measures for the study of LMS success. In this study we concentrate on those success measures which can reasonably be considered to be influenced by student and instructor involvement with the LMS. These are: LMS use, satisfaction with the LMS, and reported benefits. Each of these success measures is described below.

LMS use. IS use may be measured in a variety of ways (Burton-Jones and Straub 2006), including duration, frequency, and intensity (Venkatesh et al. 2008) and use for different activities (Doll and Torkzadeh 1998). Because all types of LMS use might contribute to successfully obtaining benefits from use, and use for social learning may be particularly beneficial (Rudestam and Schoenholtz-Read 2002), type of use and amount of use are both of importance in this study.

Student satisfaction with LMS. User satisfaction relates to the attitude or response of an end user towards an IS. In this study, student satisfaction with LMS is defined as the satisfaction of the student with their use of the LMS for learning in a particular course.

Student benefits. Whilst most students report efficiency as a significant benefit from their use of LMS, some also perceive improvements in learning (Lonn and Teasley 2009). Both types of benefit are of interest in this study. Reports of efficiency benefits are consistent with the work process benefits that are typically considered in IS success studies (Goodhue 1995). Any actual improvements in learning are likely to rest on the way an individual instructor implements a given course in the LMS (Dillenbourg et al. 2002). Across a whole university, such effects are likely to be difficult to detect without a widespread transformation of teaching, which has not occurred to date (Becker and Jokivirta 2007). Nonetheless, students’ perceptions of the extent to which the LMS has contributed to their learning are of interest, along with more objective measures of learning such as expected examination results (Wood and Locke 1987).

System quality, information quality and service quality are the remaining constructs from the DeLone and McLean model of IS success. LMS system quality is concerned with issues such as ease of use, reliability and security. LMS information quality relates to the characteristics of the information that the LMS produces. It is concerned with issues such as the timeliness, accuracy, relevance and format of the information provided. LMS service quality refers to user perceptions of the various conditions relating to support for system use. In this study, aspects of service quality that may be relevant include the relationship of LMS users with support staff, availability of technology, and provision of training. In some studies of IS use, these aspects of service quality have been described as facilitating conditions (Goodhue and Thompson 1995; Staples and Seddon 2004; Venkatesh et al. 2008).

Influences on LMS success

Most research to date on student use of LMS has focused on the relationship between LMS quality and student satisfaction with the system. Both system quality (Roca et al. 2006; Chiu et al. 2007; Sun et al. 2008) and information quality (Roca et al. 2006; Chiu et al. 2007) have been shown to influence satisfaction. The role of service quality is less clear: whilst Roca et al. (2006) found that service quality played a role in determining learner satisfaction, Chiu et al. (2007) found that it did not, and suggested that service quality may be more important in the overall success of e-learning within an institution rather than at the level of individual courses.

Previous experience with information technology has also been found to have an impact on student satisfaction and perceived learning effectiveness (Wan et al. 2008). In addition, a cognitive factor, perceived usefulness, has been associated with use and satisfaction (Hayashi et al. 2004; Roca et al. 2006; Sun et al. 2008).

Student involvement and LMS success

Research on involvement in a range of domains has shown it to have a positive influence on system usage (Barki and Hartwick 1991; Hartwick and Barki 1994; Mills 2006), user satisfaction (Blili et al. 1998; McGill and Klobas 2008), and benefits for individual users (McGill and Klobas 2008), as well as perceived usefulness (Hwang and Thorn 1999; Markus and Mao 2004). In this study, student involvement with LMS is defined as the importance and personal relevance of the LMS, as implemented in a particular course, to a student. Because LMS are IS, it seems likely that the level of student involvement with an LMS will influence the success of the system. The following hypotheses are therefore proposed:

H1a: Student involvement with an LMS positively affects student use of the LMS.

H1b: Student involvement with an LMS positively affects student satisfaction with the LMS.

H1c: Student involvement with an LMS positively affects student benefits from LMS use.

Instructor involvement and LMS success

Because the particular implementation of an LMS in a given course depends on the instructor, the instructor’s level of involvement—the importance and personal relevance of the LMS to the instructor—is also likely to influence the success of the LMS for students. Course environments in an LMS can vary from simple repositories of material to be downloaded by students to complex environments that facilitate interaction between students and with instructors. Instructors design the interface for their course and implement different levels of functionality (e.g., interactive quizzes, calendars, chat rooms) and they can spend substantial amounts of time and effort establishing a course within an LMS and then interacting with it as a user whilst the course is running. The time that an instructor spends interacting with students can range from very little to levels beyond that possible in a regular course that has 4–6 h of contact. Thus, it is apparent that instructor involvement with an LMS course offering can vary widely.

Student satisfaction with e-learning has been shown to vary with instructor participation and interaction (Hiltz 1993; Swan 2001) and instructor attitude toward e-learning (Chyung and Vachon 2005; Sun et al. 2008). Liaw et al. (2007) investigated the role of instructor-led learning in the success of e-learning and found that positive student perceptions of an extensive instructor presence were associated with viewing e-learning as an effective learning environment. Given the positive influences of these instructor-related factors, it seems likely that instructor involvement will influence LMS success measures. Hence the following hypotheses are proposed:

H2a: Instructor involvement with an LMS positively affects student use of the LMS.

H2b: Instructor involvement with an LMS positively affects student satisfaction with the LMS.

H2c: Instructor involvement with an LMS positively affects student benefits from LMS use.

Summary of hypotheses

In Fig. 2, the relationships to be tested in this study are mapped onto the DeLone and McLean (2003) model. Our model proposes that, in addition to the relationships posited by the DeLone and McLean model of IS success, in the LMS domain student and instructor involvement play critical roles in determining the benefits of LMS use.

Fig. 2 Hypothesised effects of involvement on LMS success, mapped to the DeLone and McLean (2003) model of IS success

The figure shows that, following the DeLone and McLean model, LMS system quality, LMS information quality and LMS service quality directly influence levels of LMS use and learner satisfaction, which in turn affect student benefits. Student involvement and instructor involvement are proposed to influence student benefits both directly and indirectly via their influence on satisfaction with the LMS and on level of LMS use. When students and instructors are highly involved with the LMS offering for their course (i.e., they consider it important and relevant), this positively influences their satisfaction with the LMS, their level of LMS use, and the benefits they obtain from use.

Method

A possible approach to testing the hypotheses in this study would be to use structural equation modelling (SEM) to test average effects across the model as a whole. We chose, instead, to use regression analysis for two reasons. Firstly, our primary interest in this study is in the effect of involvement on LMS success, as measured in three ways (student satisfaction, use and benefits) rather than in the end-to-end flow of effects from quality to benefits. Secondly, given the number of external factors potentially associated with satisfaction, use and benefits (student experience with information technology, the Internet and LMS, the course they are taking, and demographic factors such as gender and age) we wanted to control for these external factors rather than averaging out their effects across the sample as a whole or a small number of pre-defined groups.

Participants

Participants were 244 students enrolled in a small comprehensive Australian university which had used WebCT as its LMS for around 8 years and had recently implemented WebCT Campus Edition 6 (CE6). Tables 1 and 2 summarise their characteristics.

Table 1 Participant characteristics
Table 2 Experience with computers, Internet and LMS

Recruitment and sampling

A form of quota sampling was used with the aim of obtaining 200 or more responses from students enrolled in a wide variety of courses and degree programmes, in order to minimise group level effects associated with any specific degree, instructor or course. Email was sent 2 weeks before the end-of-semester examination period to all students (approximately 3,000) enrolled in 14 different degrees. Recipients were invited to participate in the study by clicking on a link to complete a questionnaire on the web. The questionnaire took approximately 10 min to complete. Completion of the questionnaire was voluntary and all responses were anonymous. Data collection ceased during the examination period, when the quota had been reached and there was sufficient variation in experience, age, gender and field of study.

Measures

With the exception of use, all variables included in the hypothesis tests were measured with multi-item scales. Where possible, the scales were based on scales developed and tested by researchers working in other IS domains, and were adapted for the LMS domain. In some cases, new items were developed where we felt that existing scales or items might not adequately represent the concept in the LMS domain. The sources of scales and items, and the adaptations made, are described in this section. The items themselves are listed in Online Resource 1 (see supplementary material).

Because LMS are usually adopted university-wide, students are likely to use an LMS for several different courses throughout their study. Furthermore, because the exact LMS environment that the student experiences in a course depends on how the LMS is used by the instructor, student experiences of the LMS are likely to vary by course. In this investigation, students were asked to respond to questions with respect to one course they were currently taking that used WebCT. The WebCT implementation of the course was called “the site” in the questionnaire.

Independent variables: student involvement and instructor involvement

Involvement was operationalised with Barki and Hartwick’s (1991) involvement instrument, which measures the perceived importance and personal relevance of an IS. The scale is a seven-point semantic differential with 11 items. In this study students were asked about both their own involvement with the LMS site for the course and their perceptions of their instructor’s involvement. While there is a risk of error in using student perception of instructor involvement as an indicator of actual instructor involvement, Skinner and Belmont (1993) found that instructor reports of their involvement were correlated with student perceptions of instructor involvement in traditional teaching. Furthermore, students’ levels of LMS usage have been shown to be influenced by their beliefs about how important their instructors think it is for them to use the LMS (McGill and Hobbs 2008), and McCombs (2003) stressed the importance of capturing student perceptions of instructor engagement. Given the difficulty of mapping actual instructor involvement to student evaluations of the LMS without violating the confidentiality of either student or instructor, we used student perception of instructor involvement to represent instructor involvement in this study.

Principal axis factoring (with oblimin rotation, delta = 0) showed that student assessments of their own involvement and of instructor involvement formed two distinct factors with no cross-loadings. One item from each set (item 7, unexciting … exciting) did not load with the others but formed a separate factor; this item also failed to load as expected in a combined test of the involvement scale and an attitudes scale conducted by Barki and Hartwick (1994), and was therefore omitted in estimating involvement for this study. Both involvement factors were thus formed from the same set of 10 items. Details of the factor solution are provided in Table 3.
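For readers who wish to reproduce this step outside SPSS, the sketch below approximates the procedure in Python using the third-party factor_analyzer package. The DataFrame and column names are placeholders for illustration, not the names used in our data files, and the number of factors is left as a parameter rather than claiming to mirror the exact solution reported above.

```python
# A minimal sketch of the involvement factor analysis, assuming the
# semantic-differential ratings are columns of a pandas DataFrame.
import pandas as pd
from factor_analyzer import FactorAnalyzer  # third-party package


def paf_oblimin(items: pd.DataFrame, n_factors: int = 2) -> pd.DataFrame:
    """Principal axis factoring with an oblique (oblimin) rotation."""
    fa = FactorAnalyzer(n_factors=n_factors,
                        method="principal",   # principal axis factoring
                        rotation="oblimin")   # oblique rotation; SPSS delta = 0
                                              # is the quartimin criterion
    fa.fit(items)
    return pd.DataFrame(fa.loadings_, index=items.columns,
                        columns=[f"factor_{i + 1}" for i in range(n_factors)])


# loadings = paf_oblimin(involvement_items)
# Items that fail to load cleanly on their intended factor (as the
# "unexciting ... exciting" item did here) would be flagged for omission.
```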

Table 3 Factor solution for involvement

Cronbach’s alpha for the 10-item scale was .958 for student involvement and .980 for instructor involvement. Student involvement was calculated as the mean score on the 10 retained student involvement items, and instructor involvement was calculated in the same way. Mean student involvement on the seven-point scale was 6.00 (SD = 1.09), while mean instructor involvement was lower (although still high) at 5.56, with greater variation (SD = 1.41).
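As a concrete illustration of the reliability check and scale scoring, a short sketch follows; the DataFrame and the list of retained items are placeholders assumed for illustration.

```python
# Sketch of scale reliability and scoring, assuming a DataFrame `responses`
# whose columns listed in `student_items` hold the 10 retained items.
import pandas as pd


def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the summed score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)


# alpha = cronbach_alpha(responses[student_items])             # reported as .958 here
# student_involvement = responses[student_items].mean(axis=1)  # mean of the 10 items
```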

LMS success

This section describes how LMS success was measured in this study. Measurement of use is explained first, followed by detail of how the other success measures were derived from a pool of items drawn primarily from earlier research.

LMS use. Student and instructor involvement may encourage use of different LMS features in different ways; for example, higher instructor involvement might be associated with higher student participation in online discussion (OLD) and higher student involvement might be associated with more use of the LMS to access topic notes and materials. Use was therefore measured in terms of both type and duration of use. Students were asked to report (in hours per week): total time spent using the LMS site for the course offering, time spent using the OLD tool for the course offering, and time spent accessing topic notes and materials for the course offering. The distributions of all three measures of LMS use were highly skewed, with quite low median levels of use but long tails and high maximums, showing that some students used the LMS much more than others while others reported no use at all. The distributions are summarised in Table 4.

Table 4 LMS use, in hours

Two indicators of success in terms of LMS use were developed from this data: a measure of total use, and a measure of type and extent of use. The natural logarithm of total hours of use was used to represent total use, while the variable to measure type and extent of use (useTE) was derived from a cluster analysis of hours spent accessing topic notes and course materials and hours spent using OLD. To develop useTE, the SPSS 17.0 TwoStep cluster procedure was used with the number of clusters selected on the basis of Schwartz’s Bayesian Information Criterion (BIC). The two input variables were the natural logarithm of hours accessing notes and materials and the natural logarithm of OLD hours. The noise handling option was invoked because of the extreme outliers in maximum hours of use. Five clusters were identified and both hours accessing notes and hours using OLD contributed significantly to the definition of the clusters. Each cluster distinguished a different combination of type and extent of use, and the cluster to which each student belonged was recorded in the categorical variable, useTE. UseTE took the values 1 (very low use) for students whose use was low for both materials access and OLD; 2 (low use) for students whose use for both purposes was average (i.e., at the median) or just below and whose overall use was below the average; 3 (high materials) for students whose use was above average with more time spent accessing materials than participating in OLD; 4 (high OLD) for students whose use was above average with more time spent participating in OLD than accessing materials; and 5 (very high use) for students whose use for both purposes was above average. The number of students classified in each group is shown in Table 5 along with median usage for materials access, OLD, and in total, for students in each group.
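SPSS TwoStep clustering has no direct open-source equivalent, but the underlying idea (log-transform the two usage measures and let an information criterion choose the number of clusters) can be sketched with a Gaussian mixture model in scikit-learn, as below. The column names, the use of log1p to accommodate students who reported zero hours, and the absence of TwoStep's noise handling are assumptions of this illustrative sketch, not features of the original analysis.

```python
# Illustrative stand-in for the TwoStep/BIC procedure: cluster students on
# log-transformed hours of materials access and online discussion (OLD) use.
import numpy as np
import pandas as pd
from sklearn.mixture import GaussianMixture


def cluster_use(hours_materials: pd.Series, hours_old: pd.Series,
                max_k: int = 8, seed: int = 0) -> pd.Series:
    # log1p is used so that students reporting zero hours can be included;
    # TwoStep's noise-handling option has no direct equivalent in this sketch.
    X = np.column_stack([np.log1p(hours_materials), np.log1p(hours_old)])
    fits = [GaussianMixture(n_components=k, random_state=seed).fit(X)
            for k in range(1, max_k + 1)]
    best = min(fits, key=lambda m: m.bic(X))  # choose k by BIC
    return pd.Series(best.predict(X), index=hours_materials.index, name="useTE")


# use_clusters = cluster_use(survey["hours_materials"], survey["hours_old"])
```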

Table 5 Use, classified by type and extent of use (useTE)

Student satisfaction with LMS. To develop a pool of items to measure student satisfaction with the LMS, two items were adapted from Roca et al. (2006), two from Seddon and Kiew (1996) and one from Klobas and Clyde (1998). All items were measured on a Likert scale from 1 (strongly disagree) to 7 (strongly agree).

Student benefits. Student benefits were measured in two ways: with Likert scales and with student reports of expected grades on their next assessment. Four items were used to gauge students’ perceptions of the benefits of their LMS use. Three of these items addressed process benefits in terms of efficiency and productivity; these items were adapted from Goodhue and Thompson (1995) to the context of student use of LMS. An additional item to measure the perceived contribution of LMS use to learning was developed for the study. All of these items were measured on a Likert scale from 1 (strongly disagree) to 7 (strongly agree). As recommended by Staples and Seddon (2004) and van Raaij and Schepers (2008), a more objective measure of individual student outcomes was also sought. It was not possible to obtain actual student grades from an independent source; instead (following Wood and Locke 1987), an indicator of actual learning was obtained by asking participants what mark or percentage they had received for their last test, exam or assignment and what percentage mark or grade they expected to get for the next exam in the course. If they reported a letter grade, the researchers translated the letter grade to the midpoint of the range of marks for that grade. Both last percentage (M = 75.84, SD = 12.6) and expected percentage (M = 71.37, SD = 9.8) were approximately normally distributed in the sample. Two extreme outliers (reporting marks in the last assessment below 30%) were omitted from analyses.
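The translation of letter grades to percentages is simple arithmetic; a sketch is shown below. The grade bands are hypothetical placeholders (the study used the midpoints of the university's own grade ranges), as are the column names.

```python
# Sketch of the grade handling. The letter-grade bands below are hypothetical;
# the study used the midpoints of the university's actual grade ranges.
from typing import Optional

GRADE_MIDPOINTS = {"HD": 90.0, "D": 75.0, "C": 65.0, "P": 57.5, "N": 25.0}  # assumed bands


def to_percentage(reported) -> Optional[float]:
    """Return a reported mark as a percentage, mapping letter grades to band midpoints."""
    try:
        return float(reported)
    except (TypeError, ValueError):
        return GRADE_MIDPOINTS.get(str(reported).strip().upper())


# last_mark = survey["last_mark"].map(to_percentage)
# last_mark = last_mark[last_mark >= 30]   # drop the two extreme outliers (< 30%)
```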

LMS quality. Students were also asked to evaluate LMS quality along the three dimensions included in the DeLone and McLean (2003) model. All items were measured on a Likert scale from 1 (strongly disagree) to 7 (strongly agree). Items to measure student perceptions of LMS system quality were derived from a range of sources in order to capture the aspects of system quality relevant to student LMS use. Nine items were used. Ease of use was measured using two items from Davis (1989) and one item from Doll and Torkzadeh (1988). Perceptions of site structure and the way the system supports the process of use were measured using one item each from Davis (1989), Aladwani and Palvia (2002) and Roca et al. (2006). An additional three items were included to capture reliability (Etezadi-Amoli and Farhoomand 1996), security (Rivard et al. 1997) and response time. LMS information quality was measured using six items from Doll and Torkzadeh (1988). Four items to measure LMS service quality were drawn from scales used in earlier studies to measure facilitating conditions: two items from Baroudi and Orlikowski (1988), one item from Thompson et al. (1994) and one item from Taylor and Todd (1995). They were supplemented by two items drawn from Klobas and Clyde’s (1998) study of Internet use in universities and adapted to the LMS domain, and one item developed for this study.

Construction of the final satisfaction, benefits and quality scales. Since items to measure student satisfaction with LMS, student benefits and LMS quality were drawn from a range of sources, factor analysis (principal axis factoring with varimax rotation, an orthogonal rotation that keeps the factors uncorrelated) was used to identify a parsimonious set of items to measure each of the factors measured with Likert scales. It was not possible to obtain a solution which included all items because of collinearity (correlation above .9) among some items. One item of each collinear pair was therefore omitted from the analysis. Once an initial solution was reached, the pool of items was further reduced to obtain simple structure by omitting items with low communalities or cross-loadings of .4 or more. The final factor solution is shown in Table 6. It consists of all but one of the information quality items from the Doll and Torkzadeh (1988) scale (the omitted item, timeliness of information, cross-loaded with system quality) and reduced sets of the items drawn from the other sources used to construct scales for this study. This solution explained 71.51% of the variance in the data. Cronbach’s alpha was very good (above .8, as shown in Table 6) for all scales except the scale to measure system quality, where it was .63.
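Because the pruning rules above are mechanical, a short sketch may help. It assumes a DataFrame of Likert responses, again uses the third-party factor_analyzer package, and shows a single pass rather than the iterative re-fitting used to reach the final solution; all names are placeholders.

```python
# One pass of the item-pruning rules: drop one item of each collinear pair
# (r > .9), then flag items that cross-load at .4 or more on a varimax-rotated
# principal axis solution. Assumes a DataFrame `likert_items` of item responses.
import pandas as pd
from factor_analyzer import FactorAnalyzer


def drop_collinear(items: pd.DataFrame, cutoff: float = 0.9) -> pd.DataFrame:
    corr = items.corr().abs()
    dropped: set = set()
    cols = list(items.columns)
    for i, a in enumerate(cols):
        for b in cols[i + 1:]:
            if a not in dropped and b not in dropped and corr.loc[a, b] > cutoff:
                dropped.add(b)  # keep the first item of each collinear pair
    return items.drop(columns=sorted(dropped))


def cross_loaders(items: pd.DataFrame, n_factors: int = 5, cutoff: float = 0.4) -> list:
    fa = FactorAnalyzer(n_factors=n_factors, method="principal", rotation="varimax")
    fa.fit(items)
    loadings = pd.DataFrame(fa.loadings_, index=items.columns).abs()
    return [item for item, row in loadings.iterrows() if (row >= cutoff).sum() > 1]


# retained = drop_collinear(likert_items)
# to_review = cross_loaders(retained)  # candidates for omission, alongside items
#                                      # showing low communalities
```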

Table 6 Factor solution for five IS success factors

Scales to measure each of these factors were developed by taking the mean of the items that loaded on each factor, as shown in Table 6. Table 7 presents the summary statistics for these scales. Service quality and satisfaction were, on average, rated below the mid-point of the seven-point scale, although there was greater variation in satisfaction. System quality and information quality were rated above the mid-point of the scale, although not as highly as student benefits.

Table 7 Summary statistics for five LMS success factors

Regression modelling

Our hypotheses about the effect of student and instructor involvement on IS success were tested by estimating two regression models for each of the dependent variables: LMS use (total use and useTE modelled separately), student satisfaction with LMS, and student benefits (perceived benefits and expected results modelled separately). The first model (M1) includes only the involvement variables (student involvement and instructor involvement).

M1 does not take into account the effect of the flow of IS quality and other IS success factors through the hierarchical system of influences on success as outlined by DeLone and McLean (2003) and illustrated in Fig. 2. Thus, a second model which includes all the antecedent IS success factors shown in Fig. 2 was estimated to identify the effect of involvement on IS success over and above the flow-on effect of LMS quality factors. In a separate stage, students’ individual characteristics (gender, age, discipline area and so on) were added to this second model. Because addition of the individual characteristics did not significantly change either regression coefficients or levels of significance once LMS quality was taken into account, the M2 results reported in Table 8 include both the antecedent IS success factors and individual student characteristics as well as student and instructor involvement.
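To make the two-model comparison concrete, a sketch using ordinary least squares in statsmodels is given below for one continuous outcome (satisfaction). The variable names are placeholders assumed for illustration, and the categorical useTE outcome would require a different model (e.g., a multinomial one) rather than OLS.

```python
# Sketch of the M1/M2 comparison for the satisfaction outcome.
# Column names are placeholders for the variables described in the text.
import pandas as pd
import statsmodels.formula.api as smf


def involvement_models(df: pd.DataFrame):
    """Fit M1 (involvement only) and M2 (involvement plus antecedent factors)."""
    m1 = smf.ols("satisfaction ~ student_involvement + instructor_involvement",
                 data=df).fit()
    m2 = smf.ols("satisfaction ~ student_involvement + instructor_involvement"
                 " + system_quality + information_quality + service_quality"
                 " + total_use_log + C(gender) + age",
                 data=df).fit()
    return m1, m2


# m1, m2 = involvement_models(survey)
# print(f"M1 R^2 = {m1.rsquared:.3f}, M2 R^2 = {m2.rsquared:.3f}")
```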

Table 8 Effects of student and instructor involvement, with effects of covariates

Results

Contrary to hypothesis H1a, student involvement affects neither total LMS use nor type and extent of LMS use. This can be seen in both the use and useTE panels, both before (M1) and after (M2) taking other possible influences on total use into account. On the other hand, instructor involvement does affect use. This relationship is seen, however, only when we distinguish between different types of use; it cannot be seen when we look only at the total amount of LMS use overall. Furthermore, the relationship is quite subtle. The M1 column for useTE shows that higher levels of instructor involvement are more likely to be associated with high use for OLD than with very low use overall (line 1) or very high use overall (line 2). The same pattern is observed after taking into account all IS quality factors, satisfaction and individual characteristics. This result suggests that instructor involvement plays an important role in encouraging students to use the LMS, but also, importantly, in guiding them to use it appropriately; in particular, the number of hours very high users reported using the LMS to download materials (median of 12.0 h) appears excessive. Thus, hypothesis H2a is supported, although the nature of the effect of instructor involvement on use is more complex than envisaged.

When satisfaction with LMS is modelled as a simple reflection of involvement (Satisfaction M1 in Table 8), both student involvement and instructor involvement appear to have a small but significant effect, but once the quality variables, use and individual characteristics are included in the model (Satisfaction M2) this slight effect disappears. Student satisfaction with the LMS is more satisfactorily modelled as reflecting only students’ perceptions of LMS quality. This model explains 57.3% of variation in satisfaction whereas student and instructor involvement together could explain only 7.8% of the variance in satisfaction. Thus, hypotheses H1b and H2b are partially supported. Both student and instructor involvement have a small effect on student satisfaction with the LMS, but this effect is masked by the effect of LMS quality.

Involvement provides a good explanation of the perceived benefits of LMS use for student users, but does not affect actual learning as measured by student reports of expected results. The M1 results for student benefits show that, taken together, student and instructor involvement explain 39.7% of the variance in students’ reports of the benefits of LMS use for efficiency and learning. Student involvement explains more than twice as much variance as instructor involvement. The effect of student involvement persists once the remaining variables are entered into the model (M2), but instructor involvement no longer has an effect. Satisfaction with LMS works with involvement to influence perceived student benefits. Since satisfaction is itself a function of quality, it appears that quality acts through satisfaction to influence individual outcomes. LMS information quality can also be seen to have a small, direct effect on student benefits in M2. Hypothesis H1c is therefore supported. Student involvement positively affects the benefits to students of LMS use, although the effect is limited to learning efficiency and perceived learning rather than anticipated grades in assessed work.

Hypothesis H2c is partially supported. Instructor involvement can be seen to affect student benefits in terms of efficiency and perceived learning, but this effect is no longer noticeable once the effects of student satisfaction with LMS and, to a lesser extent, LMS information quality are taken into account.

Discussion

In this section, we revisit the results, focusing first on the hypothesised influences of student involvement on LMS success, and then on the hypothesised influences of instructor involvement on success. We begin with the success factors that are most strongly influenced by involvement.

Student involvement has a significant effect on students’ perceptions of the benefits of LMS use. The more involved a student is with the LMS site for a course, the stronger the benefits they report obtaining from use. As measured in this study, these benefits concern perceived learning as well as improvements in the process of study, but perceptions of improved learning were not matched by expectations of improved grades.

On the other hand, student involvement has no effect on LMS use. Total time spent using the LMS was influenced in this study only by experience as a user of computers and LMS. More experienced users need less time to use the LMS, and this effect is particularly evident among students who spend an above-average amount of time using the LMS to download materials. In this situation, we should not expect use to increase with student involvement; it may well be independent of involvement, as we have observed here.

Once LMS quality was taken into account, student involvement had no significant effect on satisfaction with LMS use. Indeed, the three quality variables alone explained a high proportion of the variance in satisfaction, 57.3% (from Table 8). This finding adds a further level of understanding to the results of studies which have found a relationship between user involvement and satisfaction (Blili et al. 1998; McGill and Klobas 2008). In this field study, the users were very distant from system development and support and had no role in its provision. If satisfaction is derived from quality and students have no role in determining that quality, perhaps we should not expect satisfaction to reflect student involvement; rather, student involvement acts in other ways, as seen here. We might expect, however, that student involvement would be associated with satisfaction in courses where students contribute to the design of the course work space, contribute materials and discussion, or assist with support, all of which would contribute to satisfaction by contributing to LMS quality.

Turning to instructor involvement, we see that instructor involvement affects the perceived benefits of LMS use, but the effect is small relative to the effect of student involvement and cannot be observed once the influence of user satisfaction and LMS quality are taken into account.

Looking at the intermediate success factors, satisfaction with LMS and LMS use, the only effect of instructor involvement is on use of the LMS for online discussion where stronger instructor involvement is associated with a higher probability of above average LMS use for online discussion. This effect is not seen, however, for above average use of the LMS for materials download. Materials download is a more passive use of the LMS for an instructor than promoting online discussion, and the effect of instructor involvement on LMS use for online discussion suggests that students respond to this qualitative difference in their instructors’ involvement in guiding their use of the LMS. The effect of instructor involvement on LMS use for online discussion is particularly interesting when it is considered alongside the relationship between computer experience and LMS use measured in hours per week. Instructor involvement encourages students to participate in online discussion and this appears to overcome the reluctance that longer term computer users might feel in using the LMS in more routine ways.

The fact that only partial support was obtained for the hypotheses about the effects of involvement on satisfaction with LMS and the effect of instructor involvement on student benefits suggested that further investigation of these relationships was required. The information quality of an LMS site is likely to reflect the extent of an instructor’s commitment to developing a learning environment that is engaging and informative for users, and this commitment should be visible in student perceptions of information quality. In addition, because students themselves participate (to a greater or lesser degree, depending on how the LMS is envisaged in the course) in adding material and discussion to the LMS, information quality may also reflect student involvement. There is, on the other hand, little reason to expect that involvement would affect the other quality variables. Thus, in order to further explore the relationship between involvement and LMS success, we tested the effect of involvement on the three LMS quality factors. As shown in Table 9, both instructor involvement and student involvement affect information quality, with instructor involvement having the stronger effect. Neither system quality nor service quality was affected.

Table 9 Effects of student and instructor involvement on LMS quality, standardised coefficients, with effects of covariates

In terms of the DeLone and McLean (2003) model of IS success, the results of this study confirm the relationship between LMS quality and satisfaction, and satisfaction and benefits of LMS use. Satisfaction with LMS reflects LMS system quality, information quality and service quality. Student benefits, on the other hand, reflect satisfaction rather than use. The effect of student involvement on student benefits is a particularly important observation indicating that the benefits of use reflect at least two paths, one of them based on the system, its quality and associated satisfaction with use, and the other on the extent to which the individual user is involved in the system, i.e., the extent to which they think it is important and relevant. The most positive benefits are likely to be achieved when the process of use is satisfactory (reflecting quality) and the system is perceived by the user to be important and relevant. Use of a satisfactory system that the individual does not consider to be particularly important or relevant is less likely to result in significant net benefit than a relevant and important system. Similarly, a relevant and important system of low quality (and, thus, low user satisfaction) is less likely to be of benefit than a higher quality system.

The results of this study should be considered alongside two methodological issues. The first is associated with the sampling procedure. Because the participants were volunteers who responded soon after receiving the email invitation, there is a possibility that they had higher involvement with the LMS than other LMS users. Thus, we caution that the observed effects may be limited to the most highly involved users.

The second methodological issue concerns measurement of instructor involvement. Instructor involvement was estimated in this study by asking students their perception of instructor involvement rather than by direct measurement from instructors. The results obtained by using the indirect measure show that, even though there might be increased error associated with measuring instructor involvement in this way, sufficient information can be obtained to distinguish the effects of instructor involvement from those of student involvement. Nonetheless, a study in which instructor involvement could be matched to student response to a system would provide a stronger test of the relationships tested and observed here.

Conclusions

Involvement is important to LMS success. Student involvement helps students obtain benefits such as improved effectiveness and productivity when studying. Instructor involvement guides appropriate use, both in terms of the nature of use and the extent of use (neither too little nor too much overall). Furthermore, instructor involvement contributes to student benefits by affecting LMS information quality, which in turn influences the benefits students report obtaining from use through its effect on student satisfaction.

By identifying that involvement affects LMS success, we have shown the value of looking at a wider set of influences on IS success. It is not sufficient to consider satisfaction and extent of use alone if we want to help people develop successful IS (including LMS) that make a difference. In addition to the effect of involvement, this paper confirms the importance of studying the nature of use rather than just the extent of use. Furthermore, the method used in this study showed that individual differences are much less important than involvement and LMS quality in explaining appropriate LMS use, satisfaction and student benefits.