Introduction

The rise of computer technology and the World Wide Web has led to significant changes in the development of learning environments. Nowadays, instructors have many opportunities for designing technology-enhanced learning environments. In higher education, the increasing popularity of content management systems (CMSs), such as Blackboard, is an example of this development (Chan et al. 2003). The widespread adoption of CMSs can be ascribed to their features that are claimed to be beneficial for students’ learning. In the first instance, CMSs provide a rich toolset offering various kinds of learning support, as presented in Table 1. This table categorises tools according to the kind of support that is provided for the learning process. In general, three types of tools are distinguished: (1) information tools that structure or elaborate the information that has to be learned; (2) cognitive tools that allow students to process the content deeply; and (3) scaffolding tools that guide students’ learning process (Dabbagh and Bannan-Ritland 2005; Hannafin et al. 1999; Jonassen 1999; Lust et al. 2012). The provision of such a tool set is claimed to stimulate enriched learning because students are supported in various ways in their learning process (Dabbagh and Kitsantas 2005; Nutta 2001).

Table 1 Tool classification scheme

CMSs allow students to control their tool-use (i.e., to decide themselves to select and use tools). Theoretically, this learner control in tool-use is claimed to (a) provide adaptive support (Friend and Cole 1990; Williams 1996), (b) stimulate flexible and self-directed learning (Coates et al. 2005) and (c) raise students’ interest and engagement with the topic (Lepper 1985).

Although these educational arguments in favour of CMSs are widespread, they rest on the assumption that all students are compliant (i.e., use the tools as prescribed). Perkins (1985), however, argued that this assumption is problematic because students are agents who exercise control and choice. Adaptive tool-use, according to Perkins (1985), can be considered a self-regulative strategy that presupposes that a student is able to select and use tools in line with the learning requirements of the learning environment. Therefore, adaptive tool-use implies that a learner can grasp the learning affordances of the available tools, which in turn leads to learning gains. Even when tools are objectively functional for the learning task at hand, Perkins (1985) stressed three learner-related conditions for the adaptive use of tools. In particular, adaptive tool-use presupposes that (1) students perceive the tools’ functionalities and the learning task in a similar way as the designers, (2) students are skillful in using the tools and (3) students are motivated to spend effort and time in using the selected tools (Perkins 1985). Adaptive tool-use, thus, cannot be assumed because it presupposes learner-related conditions that not every student masters. Empirical evidence on students’ tool-use within technology-enhanced learning environments supports Perkins’ (1985) argument. Multiple studies that logged the frequency of use (Hoskins and Van Hooff 2005; Hammoud et al. 2008; Bartholomé et al. 2006; Bera and Liu 2006), the duration of use (Clarebout and Elen 2008, 2009; Jiang et al. 2009) and the quality of use (Gerber et al. 2007; Iiyoshi and Hannafin 1998; Oliver and Hannafin 2000) reveal that tools are often neglected or used in other ways than intended. Moreover, multiple studies revealed that these tool-use differences affected students’ performance. In line with Perkins (1985), empirical evidence stresses that adaptive tool-use cannot be assumed (i.e., only a minority of students profit from the learning affordances of the learning environment).

This evidence, however, is mainly restricted to controlled learning environments or learning environments that were uniquely designed for the research purposes (Grabinger 2008). With respect to CMSs in real-life settings, little is known about how students profit from the CMS tools, how they differ in their tool-use and whether these differences affect students’ learning (Lust et al. 2012). The current study addressed these concerns. Moreover, the current study examined students’ tool-use in a CMS-supported course from a temporal perspective (i.e., the moment when tools are used). This temporal perspective has been neglected in research on tool-use, but it seems important to consider. In particular, the tool-use framework of Perkins (1985) suggests that adaptive tool-use is conditional. Specifically, the framework starts from a fixed learning task and theorises the necessary learner conditions in order for tools to support learning. The latter implies that students’ tool-use must change if learning tasks and hence learning requirements are changing. Additionally, in contrast to controlled learning environments, CMSs deal with longer learning episodes that consist of multiple learning phases (Grabinger 2008). These phases are qualitatively different in the learning tasks or learning requirements that are set and consequently in the kind of learning needs students encounter (Perry and Winne 2006). Because CMS tools address different kinds of learning needs, as depicted in Table 1, it is plausible that students’ learning not only depends on the extent to which tools are used, but also on the moment when particular tool-types are used throughout the learning phases of the CMS-supported course. The current study investigated this main hypothesis (i.e., whether adaptive tool-use depends on the moment when tools are used throughout the course’s phases).

In order to set up hypotheses about the optimal moment of using a particular tool-type, theories regarding expert development (Anderson 1987, 2000; Schraw 2006) and domain learning (Alexander et al. 1995; Shuell 1990) are applicable given that they stress different phases of long-term cognitive development. In particular, cognitive development evolves throughout three learning phases that differ qualitatively in the kind of learning needs that students encounter (Shuell 1990), the study strategies that students use in order to deal with these learning needs (Alexander et al. 1995; Shuell 1990) and consequently the knowledge that is retrieved (Anderson 2000; Alexander et al. 1995; Schraw 2006; Shuell 1990). The first learning phase is characterised by a need to become knowledgeable in order to deal with the new information that students encounter (Shuell 1990). In the absence of support, students use superficial learning strategies such as memorising and rehearsing because they lack familiarity with the subject (Alexander 2003; Shuell 1990). Accordingly, students acquire knowledge that is characterised by an accumulated amount of isolated pieces of knowledge (Alexander et al. 1994, 1995; Alexander and Murphy 1998; Ge and Hardé 2010). Empirical evidence supports this first phase. In multiple studies, students reported less sophisticated study strategies when they were confronted with a new domain of knowledge (Alexander et al. 1997; Alexander and Murphy 1998; Alexander et al. 2004; Ge and Hardé 2010; Murphy and Alexander 2002). Because students are novices in the first learning phase, they need learning support that compensates for their lack of domain knowledge and strategies. In particular, using basic information tools such as outlines is most supportive in the first phase because they structure the content. Additionally, using scaffolding tools such as adjunct questions is most supportive in the first phase because they support students in their cognitive and metacognitive processing so that eventually students can perform the activity themselves. The second learning phase is characterised by a need to organise the isolated pieces of knowledge into meaningful knowledge structures (Shuell 1990). Therefore, students use other strategies (i.e., they apply this knowledge to new situations, look for applications of the material and hence reflect on the retrieved knowledge) (Alexander 2003, 2004; Shuell 1990). Consequently, this results in knowledge that is organised into meaningful knowledge structures (Alexander et al. 1994, 1995; Alexander and Murphy 1998; Ge and Hardé 2010). Empirical evidence supports this second phase as well and indicates that students reported higher-order strategies such as relating, structuring and elaborating when they had some notion of the subject matter (Alexander et al. 1997; Ge and Hardé 2010). Therefore, it is expected that students need another kind of support in the second phase. Exemplary for this expectation is the expertise reversal effect, which means that scaffolding tools such as worked-out examples are very effective for novice learners but become ineffective once students have some level of expertise (Kalyuga et al. 2003). Instead of scaffolding tools, students now need learning support that elicits the cognitive and metacognitive processes they have already acquired. In particular, cognitive tools induce higher-order thinking by providing a means for manipulation (e.g., organising tools), experimenting (e.g., knowledge modeling tools) and reflection (e.g., communication tools). Additionally, elaborated information tools are most supportive in the second phase as well, because they provide elaborated information (e.g., practical applications) about the course content. In the third phase, knowledge structures become highly integrated and function in an autonomous way (Alexander et al. 1994, 1995; Alexander and Murphy 1998; Ge and Hardé 2010). At this point, students are experts. Therefore, it is reasonable to assume that using CMS tools becomes counter-productive at this stage. For the current study, which focused on a CMS within an undergraduate course, it is questionable whether this level of expertise was reached. In this respect, we refer to a study by Alexander et al. (1997) in which only the first two phases of cognitive development were revealed within an undergraduate course.

Hence, the phases in long-term learning, such as in a CMS-supported course, suggest that adaptive tool-use depends on the moment tools are used in relation to the learning phase in which they are used. As mentioned previously, however, it is unclear how students use the CMS tools, whether they differ in the moment CMS tools are used and whether this moment of use is important for students’ learning. Our study addressed these issues by investigating the following two research questions:

  1. How do students differ in the moment when CMS tools are used throughout the course? Can different user groups be distinguished?

  2. Does the moment when a CMS tool is used affect students’ learning for the course?

Method

Participants

Participants were 158 of the 175 first-year Educational Sciences undergraduates (90 %) at the KU Leuven. There were 152 women and 6 men. Most of the students were 18 years old (73.2 %). The distributions of gender and age represent the demographics of the whole cohort of 175 students and are typical for Flemish Educational Sciences courses.

The blended course unit

Content

At the University of Leuven, “Learning and Instruction” is a theoretical Bachelor course at the Department of Educational Sciences that gives an introduction to the field of instructional sciences. The course content consisted of two parts: a theoretical introduction to the main concepts; and an expansion of these concepts. The latter part consisted of different scientific contributions that relate to the main concepts (i.e., they apply the main concepts). The course content was provided through a paper syllabus.

Learning tasks

Students’ learning was assessed by an examination and an assignment. The examination consisted of three parts that measured different aspects of students’ learning. The first part contained items for which students had to reproduce their knowledge of the course content, and hence these items were labeled ‘factual items’. In terms of Bloom’s revised taxonomy (Anderson and Krathwohl 2001), these factual items required students to retrieve or to remember the basic concepts of the course. For instance, the item “How do learning processes differ from biological processes such as the development of the lungs?” required students to remember the general definition of learning. The second part consisted of items that forced students to relate different aspects of the course content to each other in order to construct meaning from the course content. In terms of Bloom’s revised taxonomy (Anderson and Krathwohl 2001), these items required students to understand or to comprehend the course content. Therefore, these items were labeled ‘comprehension items’. For instance, students were given a mind-map in which they had to fill in the main concept that clustered all the concepts in the mind-map. This task required students to relate the different concepts to each other and hence to understand the underlying structures/connections between the different concepts. The items in the last part of the examination required students to interpret specific situations in terms of the course content. In terms of Bloom’s revised taxonomy (Anderson and Krathwohl 2001), these items required students to apply the course content to particular cases. Hence, these items were labeled ‘application items’. For example, students were given the following case: “The teacher corrects Rose’s homework. Rose got a wrong answer. Instead of grading this wrong answer, the teacher calls Rose and asks her to explain why she gave this solution and how she reached this solution”. Students were then asked what kind of learning theory the teacher applied and why. This case required students to apply their content knowledge to daily cases. The assignment consisted of the following proposition: “Good education implies the use of active didactical methods”, about which students had to argue. For this assignment, students had to translate their content knowledge into critical arguments regarding their agreement or disagreement with the proposition. In terms of Bloom’s revised taxonomy (Anderson and Krathwohl 2001), this assignment forced students to present and defend their argument regarding the proposition by making judgements about the course content, by evaluating the validity of different ideas or theories, and by using these evaluations in order to build their argument.

Learning phases

The learning tasks required students to comprehend the course content, to apply it to actual learning situations and to critically reflect on it. Hence, the course expectation is that students progress through the first two learning phases, namely the initial and the intermediate phase. The initial phase encompassed the period when the main concepts were introduced through the lectures (February–April). This phase required students to retrieve and comprehend the main concepts as measured by the factual and the comprehension items of the examination. The intermediate phase encompassed the period wherein the main concepts were elaborated (May–June). This phase required students to apply and to critically reflect on the main concepts as measured by the assignment and the application items of the examination.

Tools

In addition to the syllabus, a series of lectures, a CMS and a team of support staff were at students’ disposal. The support staff organised three learning support sessions that students could attend voluntarily. These sessions provided conceptual scaffolding support (i.e., they rehearsed particular parts of the course content and hence supported students in dealing with the course content).

The CMS was designed using Blackboard (version 9). Access to and use of the learning environment were under students’ control. A variety of tools, or supportive elements, were available on the CMS. First, there was administrative information about the course (e.g., course information, announcements and planning). Second, different basic information tools were available (i.e., course material outlines and web lectures). These information tools provided the content in a structured way and thus supported students’ information-retrieving needs. Third, elaborated information tools were available as well (i.e., web links). These information tools provided different content applications and hence supported students in applying the course content. Fourth, two cognitive knowledge-modeling tools were available (i.e., practice quizzes that allowed students to reflect on their knowledge and on the course requirements). Fifth, there was an opportunity for collaboration and communication with peers, the instructor and the course content (e.g., discussion board). Finally, there were some conceptual and metacognitive scaffolds (e.g., study tips, feedback on practice quizzes) that guided students’ attention and gave metacognitive feedback with respect to students’ study strategies.

Most of the tools were provided at the beginning of the semester, except for the course material outlines, the web lectures and the practice quizzes. The course material outlines and the web lectures were provided each week: the outlines before the specific lecture and the web lectures afterwards. Because the practice quizzes were online for a fixed period of 2 weeks, they were not included in this study.

Measurement instruments

Tool-use

Students’ use of the CMS tools was captured through log files that recorded students’ actions in the CMS course from February (month 1) until June (month 5). In this way, the study captured the moment when tools were used. For an overview of the variables and their operationalisation, see Table 2. Where possible, multiple log indicators were used in order to capture the way in which students used these CMS tools (see the review by Lust et al. 2012). In most cases, the frequency of access was logged. The frequency measure is an indicator of the amount of attention that students give to a particular tool. For the web lectures, the duration of use was also logged. This duration measure is an indicator of the energy that students expended in using the tool (i.e., the intensity of their tool-use). As for the other tools (i.e., course outlines, learning support and web links), it was not possible to log the duration of use accurately. In particular, because the course outlines and the learning support could be downloaded and reused offline, duration measures were only accurate for the students who used these tools online. The web links, on the other hand, directed students to other websites; from that moment onwards, it was unclear what students did. For the discussion board, the number of messages read and messages posted was logged. In line with the research of Hoskins and Van Hooff (2005), a distinction was made between passive users (students who only read messages) and active users (students who also posted and thus actively contributed to the discussion).
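To make this operationalisation concrete, the sketch below illustrates how such frequency and duration indicators could be derived from raw log events. It is only a minimal illustration: the column names (student_id, tool, timestamp, duration_sec) and the two example events are fabricated, and the actual Blackboard log format used in the study is not specified here.

```python
# Hypothetical sketch of deriving the tool-use indicators from raw log events.
# Column names and example data are illustrative, not the study's actual export.
import pandas as pd

def tool_use_indicators(log: pd.DataFrame) -> pd.DataFrame:
    """Return one row per student/tool/month with a frequency count and,
    where available, the summed duration of use."""
    log = log.copy()
    log["month"] = pd.to_datetime(log["timestamp"]).dt.month
    indicators = (
        log.groupby(["student_id", "tool", "month"])
           .agg(frequency=("timestamp", "count"),      # number of accesses
                duration=("duration_sec", "sum"))      # only meaningful for web lectures
           .reset_index()
    )
    return indicators

# Example usage with two fabricated log events
events = pd.DataFrame({
    "student_id": [1, 1],
    "tool": ["web_lecture", "outline"],
    "timestamp": ["2010-02-03 10:00", "2010-05-10 14:30"],
    "duration_sec": [600, 0],
})
print(tool_use_indicators(events))
```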

Table 2 Students’ tool use variables

Performance

Students’ performance was measured by four indicators representing students’ achievement on the factual items, comprehension items, application items and assignment. Additionally, students’ overall grade (sum of the four indicators) was considered.

Analysis

As an initial exploration, we first investigated the average tool-use trends throughout the CMS-supported course. To this end, the mean values for each tool were calculated for each month of the course by summing the weekly values and dividing this sum by the number of weeks within each month. Based on these average values, repeated-measures analyses were executed with month as the within-subject factor in order to investigate whether CMS tools were used differently throughout the course. These average trends already gave an indication of the way in which different tool types were used throughout the course.
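As an illustration of this averaging step, the following sketch computes monthly means from weekly access counts. The week labels, the week-to-month mapping and the counts are fabricated for the example; only the arithmetic (the sum of the weekly values divided by the number of weeks in the month) reflects the procedure described above.

```python
# Minimal sketch of the monthly averaging step, assuming a wide table with one
# weekly access count per column (labels and values are fabricated).
import pandas as pd

weekly = pd.DataFrame(
    {"2010-W06": [2, 0], "2010-W07": [1, 3], "2010-W19": [0, 4], "2010-W20": [2, 2]},
    index=["student_1", "student_2"],
)

# Map each week label to its month (here simply: weeks 6-7 -> Feb, weeks 19-20 -> May)
week_to_month = {"2010-W06": "Feb", "2010-W07": "Feb", "2010-W19": "May", "2010-W20": "May"}

# Average the weekly counts within each month:
# sum of weekly values per month divided by the number of weeks in that month.
monthly_mean = weekly.T.groupby(week_to_month).mean().T
print(monthly_mean)
```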

The first research question addressed possible student differences in these average trends. Particularly, we investigated whether students differed significantly in the moment when CMS tools were used throughout the learning phases of the course. In order to address this research question, different groups of users were created based on how students used the tools throughout the two learning phases. In this respect, the study used a similar approach to that of Knight (2010). Firstly, the percentage change was calculated for each student for each tool between three moments in the course (i.e., February, April and June). These three moments relate to the learning phases of the course. The first learning phase encompasses the period between February and April, whereas the second learning phase encompasses the period between May and June. This resulted in two new variables for each tool (i.e., period 1 as the percentage change within the first learning phase and period 2 as the percentage change within the second learning phase). Secondly, and similar to Knight (2010), students were assigned for each tool to one of the following groups: non-users, late users, early users and constant users. These groups differed in the moment when tools were mainly used. Consequently, assignment to one of these four groups was based on the percentage values and the timing of increase or decrease. Table 3 depicts the different user groups and the categorisation rules that were used. A remark is in order with respect to the assignment of constant users. In an ideal situation, these users would show a change of 0 for each period, indicative of no change. However, this was rarely found in the data. In line with Knight (2010), a margin of 30 was therefore used; this value is based on the distributions of period 1 and period 2. This resulted in a new variable, ‘trend group’, for each CMS tool.
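A schematic sketch of this assignment procedure is given below. The authoritative decision rules are those in Table 3; the rules coded here (zero use at all three moments for non-users, a margin of 30 around zero change for constant users, and the sign of the change in the second period to separate late from early users) are a simplified reading and should be treated as assumptions for illustration only.

```python
# Illustrative sketch of the trend-group assignment; thresholds are assumptions.
def pct_change(before: float, after: float) -> float:
    """Percentage change between two moments; 0 when there was no use at either moment."""
    if before == 0:
        return 0.0 if after == 0 else 100.0
    return (after - before) / before * 100.0

def trend_group(use_feb: float, use_apr: float, use_jun: float, margin: float = 30.0) -> str:
    period1 = pct_change(use_feb, use_apr)   # change within the first learning phase
    period2 = pct_change(use_apr, use_jun)   # change within the second learning phase
    if use_feb == use_apr == use_jun == 0:
        return "non-user"
    if abs(period1) <= margin and abs(period2) <= margin:
        return "constant user"
    if period2 > margin:
        return "late user"
    return "early user"

print(trend_group(0, 0, 0))    # non-user
print(trend_group(5, 6, 5))    # constant user
print(trend_group(0, 2, 12))   # late user
print(trend_group(8, 10, 1))   # early user
```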

Table 3 Categorisation rules for user groups

Finally, in order to investigate whether students differed significantly in the moment when a tool was used, a multivariate analysis of variance was executed for each tool, with the variable ‘trend group’ as the independent variable and the percentage changes during the two periods as the dependent variables.
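A minimal sketch of one such analysis, using Python’s statsmodels, is shown below. The data frame, variable names and values are fabricated; in the study, one such model was fitted per CMS tool.

```python
# Hedged sketch: MANOVA with the two percentage-change variables as dependent
# variables and the trend group as the factor, on fabricated data for one tool.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.DataFrame({
    "trend_group": ["non-user"] * 4 + ["early"] * 4 + ["late"] * 4,
    "period1":     [0, 0, 0, 0,   80, 60, 90, 70,   10, 5, 20, 0],
    "period2":     [0, 0, 0, 0,  -50, -40, -60, -30, 90, 120, 80, 100],
})

# The mv_test output includes Wilks' lambda, the statistic reported in the study.
manova = MANOVA.from_formula("period1 + period2 ~ trend_group", data=df)
print(manova.mv_test())
```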

The second research question addressed our main interest (i.e., whether the moment of using particular CMS tools affects students’ learning). In this way, we investigated whether adaptive tool-use within a CMS-supported course also depends on the moment when CMS tools were used. Only for the CMS tools for which the trend groups significantly differed (cf. the first research question) did we conduct a multivariate analysis of variance with trend group as the independent variable and the performance variables as dependent variables. Because the variances of the dependent variables were all homogeneous, LSD post hoc tests were used in order to interpret the performance effects.
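The follow-up comparisons could be sketched as follows. Fisher’s LSD amounts to unprotected pairwise comparisons after a significant omnibus test; the sketch below approximates this with unadjusted two-sample t-tests (strictly, LSD uses the pooled error term from the omnibus analysis), and the group scores are fabricated.

```python
# Approximate LSD post hoc comparisons as unadjusted pairwise t-tests on one
# performance indicator; group labels and scores are fabricated for illustration.
from itertools import combinations
from scipy import stats

assignment_scores = {
    "non-user":  [8, 9, 7, 10, 8],
    "early":     [13, 12, 14, 12, 13],
    "late":      [10, 9, 11, 10, 9],
    "constant":  [14, 13, 15, 12, 14],
}

for g1, g2 in combinations(assignment_scores, 2):
    t, p = stats.ttest_ind(assignment_scores[g1], assignment_scores[g2])
    print(f"{g1} vs {g2}: t = {t:.2f}, p = {p:.3f}")
```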

Results

Initial exploration: average tool-use trends throughout the course

First, we explored how tools were used on average throughout the CMS-supported course. To this end, average values were calculated for each month of the course as described in the Method section. In order to find out whether tools were used differently throughout the course, repeated-measures analyses were executed. For each CMS tool listed in Table 2, a one-way repeated-measures analysis, with average use per month as the within-subject factor, was performed. Results in Table 4 reveal that, except for posting messages on the discussion board, Wilks’ λ = 0.98, F(4, 154) = 0.89, p = 0.47, η² = 0.02, students’ use of the other CMS tools changed significantly throughout the semester. Therefore, posting messages was excluded from further analyses.
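For illustration, a repeated-measures analysis of this kind could be set up as in the sketch below, with fabricated data in long format (one row per student per month). Note that statsmodels’ AnovaRM reports the univariate repeated-measures F test, whereas the study reports the multivariate statistic (Wilks’ λ); the sketch therefore only illustrates the general procedure, not the exact test used.

```python
# Illustrative sketch of a one-way repeated-measures analysis for a single tool,
# with average use per month as the within-subject factor (data are fabricated).
import pandas as pd
from statsmodels.stats.anova import AnovaRM

long = pd.DataFrame({
    "student": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "month":   ["Feb", "Mar", "Apr"] * 3,
    "use":     [2.0, 5.0, 1.0, 0.0, 3.0, 2.0, 1.0, 4.0, 0.0],
})

result = AnovaRM(data=long, depvar="use", subject="student", within=["month"]).fit()
print(result)
```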

Table 4 Students’ tool-use throughout the semester: repeated-measures results and descriptive statistics

Descriptive information, as illustrated in Figs. 1, 2 and 3, reveals that students’ general use of the CMS peaked in March and May. Only students’ use of the web links followed this trend. Except for accessing the web lectures and the scaffolding tools, students’ use of the other CMS tools increased in April with a peak in May, as Figs. 2 and 3 illustrate. Use of the scaffolding tools and access to the web lectures decreased gradually.

Fig. 1 General use throughout the semester

Fig. 2 Use of CMS tools throughout the semester

Fig. 3 Use of web lectures (seconds) throughout the semester

Research Question 1: How do students differ in the moment when CMS tools are used throughout the course?

The first research question was whether students differed in terms of these average tool-use trends as sketched above. Indicative in this respect were the large standard deviations in Table 4 because they reveal that these average tool-use trends hide variability in the way in which individual students used tools throughout the semester. In order to look for student differences in tool-use trends, different groups of users were created based on students’ percentage change of tool-use throughout the two learning phases of the course (see “Method” section). As in Knight’s (2010) study, students were assigned to one of the following groups: non-users, early users, late users and constant users. This resulted in the new variable of ‘trend group’ for each tool that reflected how students differed in the moment when they used the CMS tool. Table 5 depicts the ‘trend group’ variables for each CMS tool and presents the distribution of students among the categories of this variable. Table 5 illustrates that the percentage of non-users was very high for almost all CMS tools. The percentage of constant users, on the other hand, was very low, except for the course material outlines and general use.

Table 5 Distribution of trend groups

In order to investigate whether students differed significantly in terms of the moment when CMS tools were used, a series of multivariate analyses of variance were executed. For each tool, a multivariate analysis of variance, with the percentage change in the two learning phases as the dependent variables and the trend group as the independent variable, was performed. Only for general CMS use, Wilks’ λ = 0.32, F(10, 298) = 23.32, p < 0.001, η² = 0.44, the course material outlines, Wilks’ λ = 0.34, F(4, 218) = 39.40, p < 0.001, η² = 0.42, the web lectures (hits), Wilks’ λ = 0.51, F(6, 52) = 3.45, p < 0.001, η² = 0.28, the web links, Wilks’ λ = 0.04, F(4, 306) = 303.16, p < 0.001, η² = 0.80, and the messages read, Wilks’ λ = 0.46, F(4, 306) = 36.78, p < 0.001, η² = 0.33, did the trend groups significantly differ in their tool-use throughout the two learning phases. Hence, only these tools were used in the further analyses.

Research Question 2: Does the moment when a CMS tool is used affect students’ learning for the course?

Table 6 reports the performance scores for the tools for which the trend groups significantly differed (cf. supra). As Table 6 illustrates, performance scores changed depending on the moment when the tool was used. In order to find out whether these differences were statistically significant, multivariate analyses of variance were carried out with the performance indicators as dependent variables and students’ use of a specific CMS tool as the independent variable. The moment when students used the course material outlines, Wilks’ λ = 0.79, F(12, 400) = 3.07, p < 0.001, η² = 0.08, and read messages on the discussion board, Wilks’ λ = 0.86, F(10, 302) = 2.31, p = 0.01, η² = 0.07, influenced their performance significantly.

Table 6 Performance effects

With respect to the course material outlines, a significant influence was found for performance on the assignment, F(3, 154) = 7.61, p < 0.01, η² = 0.13, on the factual items, F(3, 154) = 6.40, p < 0.01, η² = 0.11, and for the total, F(3, 154) = 5.12, p < 0.01, η² = 0.09. As for the assignment, post hoc comparisons revealed that non-users performed significantly worse than late, early and constant users. Late users performed significantly worse than constant and early users. As for performance on the factual items, post hoc comparisons revealed that non-users performed significantly worse than early and constant users. Late users performed significantly worse on the factual items than early users. For the total, non-users performed significantly worse than early and constant users.

With regard to the messages read, a significant influence was found for performance on the factual items, F(2, 155) = 4.80, p < 0.05, η² = 0.06, on the comprehension items, F(2, 155) = 4.92, p < 0.01, η² = 0.06, and for the total, F(2, 155) = 3.78, p < 0.05, η² = 0.05. Post hoc comparisons revealed that early users performed significantly worse than late users and non-users on the factual items. For the comprehension items, late users performed significantly better than non-users and early users. For the total, late users performed significantly better than non-users and early users.

Discussion

Although CMSs are widespread in today’s higher education, little is known about how students profit from the educational opportunities provided through the CMS tools. The current study addressed this concern from a temporal perspective (i.e., by looking at the moment tools are used). This perspective was found to be neglected in current tool-use research but seems necessary given the different learning phases within a long learning episode, together with the different tool functionalities. Therefore, it is highly plausible that students’ learning is not only affected by the extent to which CMS tools are used, but also by the moment particular CMS tools are used throughout the course. The current study addressed this main hypothesis.

On average, CMS tools were used differently throughout the course. Interestingly, these trends differed depending on the particular tool. Students’ general CMS use fluctuated, with peaks in March and May. Hence, students used the CMS both in the first learning phase, the period between February and April, and in the second learning phase, the period between May and June. The web lectures (frequency), the scaffolding tools and the web links were more frequently accessed in the first learning phase. In contrast, the course outlines, the web lectures (duration) and the messages read were more frequently used in the second learning phase. This tool-use dynamism suggests that students’ tool-use was on average strategic (i.e., students decided to use different tool types at different moments within the course).

Nevertheless, students deviated from these average trends. Except for the scaffolding tools and the web lectures (duration), at least three different student groups could be identified that reflected significant differences in the moment when tools were used. The early users were students who used the particular tool mainly in the first learning phase (Knight 2010). In line with Shuell (1990) and Alexander (2003), this phase introduced students to the domain of ‘learning and instruction’ and required them to retrieve and comprehend the basic information. The late users, on the other hand, were students who used the particular tool mainly at the end of the course (i.e., in the second learning phase) (Knight 2010). Consistent with Shuell (1990) and Alexander (2004), this phase elaborated the main concepts of ‘learning and instruction’ and required students to critically reflect on and to apply these concepts. A last group comprised the non-users, who did not use the tools in either learning phase. Basically, these different groups contradicted the assumption that is often made in instructional design (i.e., that instructional interventions such as CMS tools elicit similar processing strategies) (Winne 2004). Adding to current evidence on CMSs, the current study reveals that students differ not only in the extent to which tools are used, but also in terms of the moment when these tools are used.

Interestingly, a majority of students were non-users for almost all CMS tools, which indicates that the tools were largely neglected by students. This lack of tool-use is in line with previous evidence in the context of CMSs (Hammoud et al. 2008; Huon et al. 2007; Macfayden and Dawson 2010) and in other controlled learning environments (Beal et al. 2008; Clarebout and Elen 2009; Lumpe and Butler 2002). It thus seems that a majority of students did not notice the learning opportunities of the CMS tools, despite their functionality for students’ learning process. Going back to Perkins’ (1985) conditions, this lack of tool-use can possibly reflect (a) students’ misconceptions regarding the learning requirements, the tools’ functionalities and the relation between them, (b) students’ lack of skills in using these tools adequately or (c) students’ lack of motivation to expend effort and time in using these tools. Despite their objective functionality for the course’s goals, it is possible that the available tools were not easy to use. This issue relates to tool design, which could be another reason for the large number of non-users.

For the course outlines and the web lectures (frequency), a fourth group of users could be identified (i.e., constant users). These users remained more or less constant in their tool-use throughout the two learning phases of the course. At first sight, this constant use could reflect that students missed the cues in the learning environment needed to adapt their tool-use to the changing learning requirements. In particular, students did not realise that the second learning phase required them to go a step further (i.e., to decrease the use of basic information tools and to increase the use of cognitive tools in line with the course’s phases). Winne (2006) refers to a production deficiency, as initially introduced by Flavell (1970), for those instances when students are not able to attend to a learning context in which learning support would be necessary. Besides a production deficiency, however, it is also possible that these constant users remained constant in their use of basic information tools because they needed basic information support within the second learning phase as well. After all, learning is not necessarily a linear process as conceptualised in the current study. Possibly, these students, although they remained constant in their use of the basic information tools, adapted their use of the cognitive and elaborated information tools throughout the two learning phases. The latter (i.e., students’ tool choice and the way in which they use the tools) remains unclear because the current study focused on students’ use of each CMS tool separately.

In line with the study’s expectations, these temporal trends proved important to consider: the moment when students used the course material outlines and the moment when they read messages had a significant effect on students’ performance. These distinct performance effects imply that not all students profited from the CMS affordances because of ill-timed use. Hence, it seems that adaptive tool-use is conditional (i.e., selecting and using tools depends on contextual conditions that shape different learning needs). In particular, using the course outlines mainly at the end of the learning episode (i.e., late use) is disruptive for learning in comparison with early and constant use. Therefore, in line with expectations, our results stress that outlines are most supportive for students’ learning when students are novices who are confronted with a new domain of knowledge (cf. the first learning phase). This is because these outlines structure the information that has to be learned and hence support students in becoming knowledgeable, a learning need that characterises the first phase (Alexander 2004; Shuell 1990). Furthermore, reading the discussion board at the end of the learning episode (i.e., late use) was most beneficial for students’ learning as well. This result is in line with the study’s expectations. In particular, it reveals that using a cognitive tool such as the discussion board is more adaptive for learning when students already have some notion of the subject matter (i.e., in the second learning phase) (Alexander 2004; Shuell 1990). This is because a discussion board provides a means for discussing the course content (Costen 2009), thereby inducing students’ higher-order thinking, a learning need that characterises the second phase (Alexander et al. 1994; Shuell 1990).

Although students differed significantly in the moment when they used the web links and web lectures, no significant performance effects were revealed. As for the web lectures, this result is surprising because they have the same functionality as the course outlines (Lust et al. 2012). It is possible, however, that not all students accessed the web lectures in order to retrieve knowledge. In line with previous evidence on students’ use of web lectures, it is possible that some students used them merely as a study aid in order to review a specific topic, or to prepare for an examination or an assignment (Acharya 2003; Brotherton and Abowd 2004; Traphagan et al. 2010). This underlines the importance of considering students’ metacognitive knowledge regarding the tools (Lowyck et al. 2004; Perkins 1985) because tools and their functionality are not always perceived by learners in a similar way as designers (Doyle 1977). In line with the learning phases (Alexander 2004; Shuell 1990), it was expected that late users of the web links would perform significantly better than early users. After all, web links provide elaborated information about the course content, a functionality that relates to the learning needs of the second phase (Alexander 2004; Shuell 1990). In line with expectations, descriptive statistics revealed that late users performed better than early users, although these performance differences were not statistically significant. The fact that the number of early users (19 %) and late users (6 %) was low can be a possible reason for the absence of a performance effect.

The latter relates to a limitation of the current study. Only a minority of students used the available CMS tools, which introduced a bias in finding trend groups that significantly differed. Moreover, the fact that the number of users was so small is possibly a limitation for the performance analyses because it is hard to compare very small groups in an adequate way. On the other hand, this result is also important for instructional design research and calls for more research. The fact that only a minority of students accessed the CMS tools, despite the widespread claims about their learning benefits, suggests a need for more research on the determinants of students’ tool-use within CMSs. In line with Perkins’ (1985) conditions on adaptive tool-use, it would be interesting to investigate the influence of these conditions on students’ tool-use. In particular, future research could focus on whether students’ ideas regarding the tools’ functionality in relation to the course goals, as well as students’ motivation towards these course goals, influence students’ tool-use decisions. Moreover, it could be investigated whether students’ tool-use skills affect their actual tool-use. In order to deal with these questions, mixed-methods research is needed. In particular, questionnaires could give insight into students’ instructional ideas and motivation, whereas unobtrusive measurement techniques, such as log data, could indicate students’ tool-use skills and their actual tool-use. Future research should address these influencing variables in order to gain insight into the profiles of students who do not profit from the available CMS learning opportunities. This would give cues about adapting CMSs to these kinds of students so that their tool-use and eventually their learning are enhanced. In addition to the large number of non-users, the use of some CMS tools (e.g., web links) was so low that it is questionable whether the ‘users’ used the tools in a meaningful way. Although the study captured students’ tool-use through unobtrusive measurement techniques (i.e., log files) rather than self-report techniques, it did so mainly by logging the frequency of access. These frequency logs are a second limitation because they merely measure students’ attention to the available tools; it therefore remains unclear how these tools were used. Future research needs to consider other log indicators in order to gain a fuller insight into the way in which tools were used. For example, it would be interesting to look for ways to capture ‘duration of use’ more accurately. The latter could possibly require some design-related adaptations to the course (e.g., providing material that is not downloadable). As a third limitation, the study did not gain insight into students’ use of face-to-face tools (i.e., the learning support sessions). The latter is a serious limitation of the current study because these sessions had a similar functionality to the online learning support. Consequently, it is possible that students did not use the online learning support because they used the learning support sessions and vice versa. This tool choice remains unclear. Future research hence needs to consider the available face-to-face tools within a blended learning environment. A final limitation relates to the generalisability of the study’s findings. The current study was conducted with a specific population (i.e., female undergraduate students studying educational sciences in a single course).
Therefore, the external validity of the results is limited. It would be interesting in future research to study multiple cohorts of students and/or observe a single group of students throughout multiple courses.

Despite these limitations, some interesting insights for instructional design were obtained. The high number of non-users raises questions about the amount of learner control that has to be provided within CMS-supported courses, especially because the effectiveness of a learning environment seems to depend heavily on students’ adaptive tool-use (Hannafin et al. 1999). In this respect, the current study highlights that not only students’ quantitative and qualitative tool-use, but also the moment of use, have to be considered when instructional designers aim to enhance students’ adaptive tool-use. Specifically, the current study provides an empirical basis for releasing some types of tools depending on the learning phase in a particular course.