
1 Introduction

The importance of information literacy in higher education is widely emphasized [1]. Nevertheless, information literacy instruction is often not well integrated into curricula and is mostly provided in one-shot sessions, which are insufficient to convey information literacy comprehensively [2, 3]. Thus, students usually acquire most of the corresponding knowledge and skills unsystematically, through informal learning in practice and without guidance from library staff or faculty. As a consequence, they vary greatly in their levels of information literacy. These individual differences have to be taken into account when designing information literacy programs. An adaptable approach to instruction seems most suitable for this purpose because it gives learners the opportunity to adjust the learning materials to their individual competencies and deficiencies [4]. These adjustments may be supported by recommendations tailored to learners’ current level of competence. This paper illustrates the potential usefulness of adaptable instruction by presenting a blended learning training of scholarly information literacy tailored to the domain of psychology. It aims at providing evidence that the training is effective overall and that learning achievements are associated with (a) adherence to individualized recommendations of adaptable online materials, which are provided based on a test of prior knowledge, and (b) subjective evaluations of the course.

1.1 Adapting Instruction to Individual Differences

Individual differences in learners’ competencies and preferences constitute a major challenge in teaching [5]. Research has provided ample evidence for intervention effects that depend on interactions between individual characteristics and treatment variables [6]. As a consequence, it has been suggested that instruction should be delivered in a differentiated or personalized fashion [7]. Regarding the way differentiated instruction is offered, two approaches may be distinguished [8]. In adaptive instruction, teaching is individualized by the instructor or the learning environment (for example, an online learning management system) based on information about the learner, such as age, the choices learners made when interacting with the system, or learners’ past performance. In contrast, adaptable instruction permits learners to control the learning process, for example by choosing among materials and tasks according to their individual competencies or needs (“learning on demand”; [9]). Adaptable instruction is assumed to have numerous benefits compared to adaptive instruction [4, 10]: as learners are given personal control over learning, they are expected to develop feelings of self-efficacy and, by monitoring and adjusting the learning process, to become more effective self-regulated learners. However, adaptable instruction does not always lead to learning benefits. According to a meta-analysis, including learner control within educational technology produced near-zero effects [11]. One reason for this finding is that learners differ in their ability to assess their individual competence levels and to set adequate learning goals. Learners with little prior knowledge, in particular, are often unaware of their deficits [12]. Therefore, it has been suggested to support the choice of learning materials with recommendations based on pretests [4]. The finding that less advanced learners in particular benefit most from recommendations [13] supports this suggestion. Providing individual recommendations is particularly relevant for information literacy instruction, as students often have inadequate information literacy skills [14] and grossly overestimate their level of prior knowledge [15]. Thus, they will have problems selecting learning contents appropriately without support [2].

When designing an adaptable approach to information literacy instruction, two challenges have to be mastered: First, recommendations have to be adequately tailored to participants’ characteristics. This task is by no means trivial: it may be difficult to select the characteristics that are most relevant for success and to assess the individual level of these characteristics [16]. Second, care must be taken to ensure that learners actually follow these recommendations; they may fail to do so, for example, because they prefer to make decisions based on individual interests instead of competencies [17]. There is reason to assume that adherence to recommendations goes along with positive subjective evaluations of instruction, which indicate that the program meets the participants’ needs and interests. These positive evaluations should, in turn, be associated with larger learning gains. For example, it was found that high achievers in a blended learning course were more satisfied with the blended format and reported having learned better [18].

1.2 Hypotheses

Based on previous work reported above, the following hypotheses are tested:

  • Hypothesis 1: The blended learning training is effective overall, yielding gains on knowledge tests as well as an information literacy self-efficacy scale.

  • Hypothesis 2: Training gains are associated with more positive subjective evaluations of the training.

  • Hypothesis 3: Participants who follow the recommendations for the materials will achieve larger training gains than participants who omit recommended materials.

2 Methods

2.1 Design and Participants

The hypotheses were tested in a field study with a pretest-posttest design. Participants were N = 64 psychology students (n = 31 at the bachelor level and n = 33 at the master level; mean age M = 24.97 years; 87.5 % female) from the University of Trier, Germany. Participation in the study was voluntary; participants were paid for taking part in the data collection sessions but not for completing the training materials.

2.2 Intervention

The domain-specific blended learning training BLInk (“Blended learning of information literacy”) includes online materials and a classroom seminar. The selection of contents is based on the psychology-specific information literacy standards of the Association of College and Research Libraries (ACRL) [19]. Most of the content is imparted by the online materials, which are allocated to eight chapters. The majority of the chapters refer to scholarly information searching, such as using reference databases like PsycINFO™ and PSYNDEX™ or web search engines like Google Scholar. Others deal with the evaluation of scholarly publications based on general as well as domain-specific quality criteria. Each chapter is preceded by an advance organizer and contains textual materials with screenshots, videos, and presentations. Exercises prompt participants to conduct literature searches on individually relevant topics. By this means, students are encouraged to apply their newly acquired skills, for example to search for publications relevant to a term paper or to their bachelor’s or master’s thesis. At the end of each chapter, participants have the opportunity to check their knowledge by completing a self-assessment test.

The classroom seminar is designed to integrate and reflect upon the online materials. It includes additional hands-on exercises related to the participants’ individually relevant search topics. Furthermore, it provides room for critical discussions, for example about the nature of psychology as a science and about the possibilities and limits of web search engines and reference databases.

2.3 Measures and Procedure

In the pretest session, data was collected in small groups of 8 to 14 participants in a computer lab at the University of Trier. Participants completed three measures of information literacy via online survey software: a test of declarative knowledge about scholarly information search and evaluation (an extended and revised version of the test published in [20]), the Procedural Information Literacy Knowledge Test – Psychology version (PIKE-P; [21]), and an information literacy self-efficacy scale. The declarative knowledge test is a fixed-choice test containing 50 items. For each item, three response options are provided, and participants are instructed to mark all response options that are correct. Total scores may vary from 0 to 1. In the current study, this test was used to derive individual recommendations concerning the chapters of the online materials that should be completed. The PIKE-P is a situational judgment test containing 22 items. Each item gives a short description of a situation requiring an information search, followed by four response options. All options are rated on a 5-point Likert scale for their usefulness in the given situation. Scoring is based on a scoring key derived from expert ratings (for a detailed description see [20]). Total scores may vary from 0 to 1. Satisfactory reliabilities of the test have been reported, and high correlations (r > .60) between test scores and performance in standardized information search tasks point to its validity. Information literacy self-efficacy was assessed by a ten-item scale developed by the authors. Each item (sample item: “I know how to use bibliographic databases to find relevant references.”) is answered on a 5-point Likert scale. In previous studies, satisfactory internal consistencies were found [22].
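The exact scoring rules are not spelled out above, but a rough sketch may illustrate how a mark-all-correct test of this kind can be normalized to the 0–1 range. The sketch below assumes that each item is scored as the proportion of its three options classified in agreement with the key and that the total score is the mean across items; all names and data are illustrative assumptions, not the authors’ actual procedure.

```python
# Illustrative scoring of a mark-all-correct test, normalized to 0-1.
from typing import List


def score_item(marked: List[bool], key: List[bool]) -> float:
    """Proportion of response options classified in agreement with the key."""
    assert len(marked) == len(key)
    hits = sum(m == k for m, k in zip(marked, key))
    return hits / len(key)


def total_score(responses: List[List[bool]], keys: List[List[bool]]) -> float:
    """Mean item score across all items (ranges from 0 to 1)."""
    return sum(score_item(m, k) for m, k in zip(responses, keys)) / len(keys)


# Example: a participant answering two (of the 50) items
keys = [[True, False, True], [False, True, False]]
responses = [[True, False, False], [False, True, False]]
print(total_score(responses, keys))  # 0.833...
```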

Following the pretest, each participant received an email with individual recommendations concerning the chapters of the online learning materials to be completed. Recommendations were based on participants’ scores on the declarative knowledge test. For this purpose, the test items were assigned to the eight chapters of the online materials, with each chapter represented by at least five items. If a participant achieved less than 66 % of the maximum test score for a chapter, the recommendation was given to work on that chapter; if 66 % or more was achieved, the chapter was marked as “optional”. During the online learning phase (see Sect. 2.2), participants were given four days to work on the online materials. They had access to all materials, including “recommended” as well as “optional” chapters, on the learning platform Moodle. Log files were recorded to monitor the participants’ online activities and to check whether they followed the individual recommendations. According to additional self-reports, participants spent between three and seven hours working on the online materials. In the subsequent face-to-face learning phase, each small group of participants attended a 150-minute classroom seminar taught by a faculty member with considerable teaching experience, supported by a student assistant.
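The recommendation rule described above lends itself to a simple sketch: per-chapter scores are computed from the items assigned to each chapter and compared against the 66 % threshold. The item-to-chapter mapping, the variable names, and the choice of the mean as the per-chapter aggregate are illustrative assumptions rather than the study’s implementation.

```python
# Illustrative reconstruction of the per-chapter recommendation rule.
from collections import defaultdict
from typing import Dict, List

THRESHOLD = 0.66  # 66 % of the maximum score per chapter


def recommend_chapters(item_scores: Dict[int, float],
                       item_to_chapter: Dict[int, str]) -> Dict[str, str]:
    """Return 'recommended' or 'optional' for each chapter."""
    by_chapter: Dict[str, List[float]] = defaultdict(list)
    for item, score in item_scores.items():
        by_chapter[item_to_chapter[item]].append(score)
    return {
        chapter: ("recommended" if sum(scores) / len(scores) < THRESHOLD
                  else "optional")
        for chapter, scores in by_chapter.items()
    }


# Example with two chapters and a handful of items (scores range from 0 to 1)
item_to_chapter = {1: "databases", 2: "databases", 3: "evaluation", 4: "evaluation"}
item_scores = {1: 0.33, 2: 0.67, 3: 1.0, 4: 0.67}
print(recommend_chapters(item_scores, item_to_chapter))
# {'databases': 'recommended', 'evaluation': 'optional'}
```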

The posttest took place two days after the classroom session. The small groups of participants again completed the pretest instruments in a computer lab. Additionally, the Inventory for the Evaluation of Blended Learning (IEBL; [23]) was used to assess subjective evaluations of the course. The IEBL comprises 8 subscales with a total of 46 items, most of which are rated on a 7-point Likert scale. Three subscales are used in this paper because of their particular relevance to its objectives: “General usefulness of the course” (6 items; sample item: “I learned something meaningful and important.”), “Acceptance of online teaching” (5 items; sample item: “It seems reasonable to offer online materials instead of conveying content exclusively in classroom sessions.”), and “Acceptance of classroom teaching” (5 items; sample item: “My understanding of the learning content is consolidated by the classroom session.”).

3 Results

Internal consistencies of all measures reached at least satisfactory levels at pretest as well as at posttest (see Table 1). To test the hypothesis that participation in the training increases information literacy, t-tests for dependent samples were performed. The results are in line with Hypothesis 1, showing highly significant training effects on both knowledge tests and on the self-efficacy scale.

Table 1. Mean scores, standard deviations, results of the t-test for dependent samples, and internal consistencies of the dependent measures.
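For readers who wish to reproduce this kind of analysis, a minimal sketch of a paired (dependent-samples) t-test in Python is shown below; the pretest and posttest arrays are illustrative placeholders, not the study data.

```python
# Paired t-test comparing posttest against pretest scores on one measure.
import numpy as np
from scipy import stats

pretest = np.array([0.55, 0.60, 0.62, 0.70, 0.48, 0.66])
posttest = np.array([0.63, 0.71, 0.69, 0.78, 0.59, 0.72])

t, p = stats.ttest_rel(posttest, pretest)
print(f"t = {t:.2f}, p = {p:.4f}")
```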

For further analyses of training effects, residualized gain scores [24] were estimated by regressing the posttest scores on the pretest scores and computing the difference between observed and predicted values. Compared to simple difference scores (posttest – pretest), these scores have the advantage of being (by definition) independent of the level of information literacy prior to training. Residualized gain scores are relative measures, representing deviations from the average change within the sample: negative values indicate that a participant’s score changed less than average, while positive values indicate that it changed more than average.
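A minimal sketch of this computation, assuming ordinary least-squares regression of posttest on pretest scores (the arrays are illustrative, not the study data):

```python
# Residualized gain scores: residuals from regressing posttest on pretest.
import numpy as np

pretest = np.array([0.40, 0.55, 0.60, 0.70, 0.80])
posttest = np.array([0.60, 0.65, 0.80, 0.75, 0.90])

# Ordinary least squares: posttest = b0 + b1 * pretest
b1, b0 = np.polyfit(pretest, posttest, deg=1)
predicted = b0 + b1 * pretest
residualized_gain = posttest - predicted

print(np.round(residualized_gain, 3))
# Positive values: gained more than predicted from pretest; negative: less.
```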

To test Hypothesis 2, which refers to associations between relative learning gains and subjective evaluations of training on the IEBL scales, Pearson correlation coefficients were computed (see Table 2). Residualized gain scores on the PIKE-P and the self-efficacy scale were positively correlated with the subjective usefulness of the course, suggesting that participants who learned more were more positive about the value of the course for their further studies. Additionally, gain scores were correlated with more positive evaluations of the online materials but not with evaluations of the classroom seminar. Accordingly, acceptance of the online elements seems to be particularly relevant for training effects.

Table 2. Intercorrelations of training effects (residualized gain scores, RES) and subjective evaluations of training (absolute scores).
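As a brief illustration, correlating residualized gain scores with an IEBL scale can be done as follows; the data shown are placeholders, not the study data.

```python
# Pearson correlation between residualized gains and an evaluation scale.
import numpy as np
from scipy import stats

residualized_gain = np.array([0.05, -0.02, 0.10, -0.07, 0.03])
usefulness_rating = np.array([5.8, 5.1, 6.3, 4.7, 5.5])  # means on a 1-7 Likert scale

r, p = stats.pearsonr(residualized_gain, usefulness_rating)
print(f"r = {r:.2f}, p = {p:.3f}")
```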

To test Hypothesis 3, participants were divided into three groups based on the analyses of the Moodle log files: about 17 % of the participants had worked on fewer online chapters than individually recommended (group 1, n = 11), 33 % had followed the recommendations exactly (group 2, n = 21), and 42 % had additionally worked on at least one of the optional chapters (group 3, n = 27). The results of the one-way ANOVA with planned contrasts of means (see Table 3) indicate that the efficiency of information literacy instruction was increased by using adaptable online materials: group 2 did not differ from group 3 in the gain scores on either knowledge test or on the self-efficacy scale, while both outperformed the students in group 1, who had failed to follow the recommendations (as proposed in Hypothesis 3).

Table 3. Means and standard deviations of training effects and subjective evaluations of training in participant groups with different levels of adherence to study recommendations.
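The grouping itself reduces to simple set logic once each participant’s recommended chapters (from the pretest) and completed chapters (from the Moodle logs) are known. The sketch below is an illustrative reconstruction of that classification, not the authors’ actual analysis code.

```python
# Illustrative adherence classification from recommended vs. completed chapters.
from typing import Set


def adherence_group(recommended: Set[str], completed: Set[str]) -> int:
    """1 = skipped recommended chapters, 2 = exact adherence, 3 = extra chapters."""
    if not recommended <= completed:   # at least one recommended chapter missing
        return 1
    if completed == recommended:       # exactly the recommended chapters
        return 2
    return 3                           # all recommended plus at least one optional


print(adherence_group({"databases", "evaluation"}, {"databases"}))          # 1
print(adherence_group({"databases"}, {"databases"}))                        # 2
print(adherence_group({"databases"}, {"databases", "evaluation"}))          # 3
```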

Comparisons of the three groups on the IEBL revealed that group 1 was more critical about the online materials but not about the usefulness of the training or the classroom seminar. Additionally, group 3 surpassed group 2 in the usefulness it ascribed to the training. Finally, concerning the subjective evaluations, it should be noted that possible scores range from 1 to 7, with a score of 4 corresponding to a “neutral” evaluation. Accordingly, all groups rated the course well above the theoretical mean of the scale.

4 Conclusions

The findings corroborate that the adaptable information literacy training presented in this paper is effective: participation increased psychology students’ knowledge about scholarly information searching and evaluation as well as their information literacy self-efficacy, and participants were generally positive about the usefulness of the course. It is particularly important to stress that training gains were associated with adherence to the recommendations of online materials: participants following the recommendations (group 2) gained more from the course than the students in group 1, who failed to work on all recommended chapters. These findings demonstrate that omissions of recommended online materials could not be compensated for by taking part in the classroom session. The causes of participants’ non-adherence should be investigated in additional studies. However, the subjective evaluations give reason to assume that participants in this group were more critical about online teaching. These students might have been overtaxed by the self-regulatory demands of online learning. Additional analyses showed that they also scored lower than both other groups on all pretest measures. Thus, a possible interpretation is that they are (at least with regard to information literacy) generally low-performing students who might benefit from additional support during the online phase of the course or need alternative forms of information literacy instruction to optimize their learning achievements.

Furthermore, working on additional materials (as observed in group 3) was associated with higher subjective usefulness but did not increase training effects. Thus, the recommendations appear to have been adequately tailored to the participants’ individual level of prior knowledge. The adequacy of the recommendations is also documented by the high level of compliance: about 75 % of the participants completed all recommended materials or even worked on more chapters. It may be argued that participants perceived the feedback based on their knowledge test scores as valid, which might in turn have increased their motivation to learn [25].

These conclusions are, however, tempered by several limitations: the training was domain-specific and tailored to the field of psychology, and the study comprised only a small and possibly selective sample of predominantly female students, who may have been particularly interested in developing their information literacy skills. In addition, participants were paid for completing the evaluation assessments, which may have biased their learning behaviors as well as their evaluations of the course. Therefore, the positive results may not easily be generalized and should be replicated in further studies.

Particular attention should be paid to the replication of results across domains and contexts. The conceptualization of information literacy as a set of “generic skills” must be questioned in the light of empirical findings that revealed qualitatively different conceptions of information literacy in different domains [26]. In addition, model-based skill decompositions point to differences between scholarly disciplines like psychology (an empirical, “soft” science) and computer science (a “hard” science) with regard to the subskills relevant for information seeking [27].

Notwithstanding these limitations, the findings may be useful for practitioners, instructors, and teachers. They corroborate that assessments of prior knowledge allow for individualized recommendations, which increase the efficiency of adaptable information literacy instruction. However, care must be taken to identify participants with low acceptance of online teaching who are “at risk” of not complying with the recommendations and will therefore not make the most of their participation.