1 Introduction

The twenty-first-century information society is unprecedented in its demands on students to understand and learn from sources that express conflicting views on controversial issues (Alexander & the Disciplined Reading and Learning Research Laboratory, 2012; Goldman et al., 2011; Rouet, 2006). For example, consider students trying to answer the question of whether nuclear power plants are a safe and efficient way to produce electricity, or whether they represent a serious threat to people as well as the environment. These students can draw on a wealth of informational resources available through a variety of print and digital outlets. However, their attempts to provide well-founded answers require that they synthesize or integrate information from source materials expressing diverse and even contradictory viewpoints. Moreover, the credibility of those sources is often a key issue, which makes students’ evaluation of sources an essential part of their critical reading and learning skills, not least when encountering competing knowledge claims about controversial socio-scientific issues, such as the one illustrated above (Britt, Richter, & Rouet, 2014; Bromme & Goldman, 2014; Tabak, 2016).

Of note is that this focus on the importance of critical source evaluation in students’ reading and learning brings together perspectives that have largely been isolated in theory and research, that is, theory and research on how consumers of information evaluate the trustworthiness of the sources they encounter on the one hand, and theory and research on learning from textual information on the other (Richter & Rapp, 2014). Thus, when social psychologists in the field of persuasion investigate how recipients of persuasive messages evaluate the trustworthiness of information based on source features (e.g., author credentials or publisher), they rarely take into account how such evaluation is related to the learning of textual information, and when educational and cognitive psychologists investigate learning from text, they rarely study whether or how readers evaluate the trustworthiness of incoming information. Recent developments, especially in research on understanding and learning from multiple conflicting texts, suggest that source evaluation and learning from text may be more closely interwoven than traditionally assumed, however (Britt et al., 2014; Kendeou, 2014).

Although critical reading and learning skills are currently considered essential in democratic societies around the world, researchers, educators, and policy makers in many countries are concerned that they are not adequately developed through schooling, not even at the level of secondary education. Therefore, it is vital to identify factors that affect how such skills develop, and to design instructional interventions to foster them. A main assumption in this chapter is that one viable path to improving students’ critical reading and learning is through developing their source evaluation skills, that is, their ability to judge the credibility or trustworthiness of sources by attending to available or accessible information about the source, such as who authored it or what kind of source it is (e.g., an encyclopedia article or a blog posting) (Bråten, Stadtler, & Salmerón, in press; Bråten, Strømsø, & Britt, 2009). In the following, we will therefore discuss theoretical and empirical advances in this area by focusing on aspects of students’ source evaluation during reading to learn about controversial issues and how this may vary with individual and textual factors, on how source evaluation skills may be promoted through systematic instruction, and on the potential effects of such instruction on students’ learning outcomes. By targeting reading to learn about controversial socio-scientific issues, such as the production of genetically modified food or the safety of nuclear power plants, the chapter also addresses important aspects of science literacy, with science education researchers (Linn & Eylon, 2006; Norris, Phillips, & Korpan, 2003; Phillips & Norris, 1999; Yang & Tsai, 2010) highlighting the challenges students face in critically evaluating and learning from popular media reports of science, especially when these deal with ill-structured problems in the form of controversial socio-scientific issues.

2 Theoretical Frameworks

Researchers interested in reading and learning contend that the twenty-first-century information society offers new opportunities, but also new potential pitfalls for students (Alexander & the Disciplined Reading and Learning Research Laboratory, 2012; Brand-Gruwel & Stadtler, 2011; Britt & Gabrys, 2002; Leu, Kinzer, Coiro, Castek, & Henry, 2013). On the one hand, rapid, almost instantaneous access to a wide range of up-to-date information, particularly when retrieving texts via Internet search engines, can potentially broaden and deepen comprehension. On the other hand, such access requires additional competencies, especially in terms of a realization that texts are socially constructed artifacts, written by a particular author, for a particular publication venue, at a particular point in time, and so forth (Britt, Rouet, & Braasch, 2013). In addition, learning often requires that students put forth the effort to integrate content information distributed across multiple texts (Afflerbach & Cho, 2009; Bråten & Strømsø, 2012; Cho, 2014; Goldman, Braasch, Wiley, Graesser, & Brodowinska, 2012).

For example, reading to learn about controversial issues such as whether artificial sweeteners or cell phones may pose any health risks requires that students allocate processing efforts toward integrating higher-quality information reported by reliable sources (Bråten, Braasch, Strømsø, & Ferguson, 2015; Goldman et al., 2012; Wiley et al., 2009), which seems particularly important when they read to better inform themselves to be able to make important behavioral decisions (e.g., Should I reduce my intake of artificial sweeteners? Should I restrict my daily cell phone usage?). The documents model framework of Britt, Rouet, and colleagues (Britt et al., 2013; Britt, Perfetti, Sandak, & Rouet, 1999; Perfetti, Rouet, & Britt, 1999; Rouet, 2006) is a theoretical account of learning situations involving conflicting messages, situations that require students to attend to and incorporate information about the source of each message into their mental representations of the issue. In essence, the documents model framework explains how good readers and learners deal with multiple textual sources presenting different or conflicting views on the same issue by constructing integrated mental representations of the issue and, at the same time, keeping track of the sources associated with the different pieces of information. According to the documents model framework, it is crucial to attend to, evaluate, and at times remember the sources of different pieces of information because tagging information about the sources themselves (e.g., the author or the publisher) to different perspectives on the issue allows readers to consider the trustworthiness of the information in light of the features of the sources. The perceived trustworthiness of information may, in turn, influence the weight and position that the information is assigned in learners’ overall representations of the issue. Subordinating or devaluing information from incompetent, discredited, or strongly biased sources, and, at the same time, giving prominence to information from more trustworthy sources will likely result in more appropriate, higher-quality mental representations of the issue (Bråten, Britt, Strømsø, & Rouet, 2011). It is thus a main assumption of the documents model framework that effective learning about a controversial issue requires a consideration of available source feature information in addition to a consideration of the connections one could make among the semantic content information offered within multiple documents.
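To make the notion of source tagging concrete, the following minimal sketch (written in Python purely for illustration; the class names, fields, and numeric trust values are hypothetical simplifications introduced here and are not part of the documents model framework itself) shows how pieces of content might remain linked to source features and be weighted by perceived trustworthiness:

    from dataclasses import dataclass, field

    @dataclass
    class Source:
        author: str   # e.g., author credentials
        venue: str    # e.g., publisher or publication type
        trust: float  # perceived trustworthiness between 0.0 and 1.0

    @dataclass
    class Claim:
        content: str    # a piece of content information about the issue
        source: Source  # the source tag: "who said it, and where"

    @dataclass
    class DocumentsModel:
        issue: str
        claims: list = field(default_factory=list)

        def add(self, claim: Claim) -> None:
            self.claims.append(claim)

        def weighted_view(self):
            # Give prominence to claims from more trustworthy sources and
            # subordinate claims from discredited or strongly biased ones.
            return sorted(((c.content, c.source.trust) for c in self.claims),
                          key=lambda pair: pair[1], reverse=True)

    model = DocumentsModel("Are nuclear power plants safe and efficient?")
    model.add(Claim("Modern reactors include multiple redundant safety systems.",
                    Source("nuclear safety researcher", "peer-reviewed journal", 0.9)))
    model.add(Claim("Nuclear power involves no risks whatsoever.",
                    Source("industry lobbyist", "promotional brochure", 0.2)))
    print(model.weighted_view())

The point of this sketch is simply that, in such a representation, content and source information remain linked, so that the perceived trustworthiness of a source can modulate the weight given to the content it conveys.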

Recently, Stadtler and Bromme (2014) proposed the content-source integration model to further explicate the cognitive processes and resources that learners draw on when encountering conflicting information about a particular issue. Like the documents model framework, this model assumes that one way to restore a coherent representation of the issue after a conflict has been detected is to attribute the conflicting views to different sources. If learners, in addition, want to actually resolve the detected conflict, however, they may also need to evaluate the trustworthiness of the different sources, asking themselves “whom to believe” regarding the issue at hand. In particular, this approach becomes pertinent and even necessary when learners are not able to evaluate the validity of conflicting information directly, for example, by judging the truth value of explanations and arguments set forth in light of prior knowledge, which is more often than not the case when students read about complex socio-scientific issues of which they have only limited prior knowledge (Bromme & Goldman, 2014).

Other recent elaborations of the documents model framework (Britt et al., 2013; Strømsø & Bråten, 2014; Strømsø, Bråten, Britt, & Ferguson, 2013) emphasize the need to pay attention to sources cited or embedded within texts in addition to the sources of separate texts (i.e., the main sources), suggesting that good learners may link content information to source information presented within a text (e.g., a cited author) and embed this source information within the source of the text itself (e.g., attribute particular content information to a particular author cited by a particular publication). The importance of contextualizing embedded sources and their messages within main sources may also be illustrated by situations where people read to inform themselves about controversial issues, such as whether artificial sweeteners may pose any health risks, in order to make behavioral decisions. In such a situation, noting and remembering whether a nutritionist’s message stating that any thoughts of health risks can be discarded appears in a document published by a large brewery or in a document published by the National Food Safety Authority may help consumers evaluate the trustworthiness of the embedded source and the message it conveys.

Thus, although explanations, arguments, and conclusions presented by various sources may certainly conflict due to the tentative status of what is known, discrepancies may also arise because sources attempt to persuade learners toward their positions. As another example, consider a cell phone industry representative urging learners to disregard all research suggesting cell phone–brain tumor relationships, potentially to guard against decreases in sales. Such situations involving attempts to sway learners toward particular points of view highlight the relevance of frameworks based on social psychology research on persuasion. For example, research guided by the elaboration likelihood model (ELM; Petty & Briñol, 2012; Petty & Wegener, 1999) has shown that information about the source (e.g., the author) of a message may inform evaluative judgments of an issue and that deeper-level elaboration of source information will likely increase its contribution to those judgments. While the ELM emphasizes that source information can affect judgments of issues whether elaboration is high or low, research within the heuristic-systematic model of Chen and Chaiken (1999) focuses on how judgments are based on heuristic processing of source information, that is, low-effort activation and application of rules stored in memory (e.g., “expert statements can be trusted”). Such rules may be cued by salient and easily processed source features, and their use may lead to judgments congruent or incongruent with judgments formed on the basis of more analytic and comprehensive processing of the actual content of the message. In brief, social psychology models of persuasion may complement the theoretical grounding of empirical work on source evaluation in students’ critical reading and learning, emphasizing that processing of source information at different levels of depth plays an important role in judging the trustworthiness of persuasive texts. As noted above, such judgment is also important when learning about complex and controversial socio-scientific issues.
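The contrast between heuristic and systematic processing sketched above can be illustrated with a small, purely hypothetical example (in Python); the rule names and feature labels below are invented stand-ins for the kind of stored rules the heuristic-systematic model refers to, not an implemented psychological model:

    # Hypothetical heuristic rules cued by salient source features.
    HEURISTIC_RULES = {
        "expert_author": +1,        # "expert statements can be trusted"
        "official_agency": +1,      # "official institutions are reliable"
        "commercial_interest": -1,  # "sales interests may bias the message"
    }

    def heuristic_judgment(source_features):
        # Low-effort judgment: sum whatever rules the source features cue,
        # without analyzing the actual content of the message.
        return sum(HEURISTIC_RULES.get(feature, 0) for feature in source_features)

    # A message from a cell phone industry representative cues a negative rule:
    print(heuristic_judgment({"commercial_interest"}))  # -> -1

A judgment reached in this low-effort way may or may not agree with one based on systematic, content-level analysis of the arguments themselves, which is exactly the congruence issue the heuristic-systematic model highlights.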

3 Empirical Work

3.1 Students’ Source Evaluation

Many studies show that students, even at secondary and post-secondary levels, do not attend to source features (i.e., author, type of publication, venue, and place and date of creation) in order to evaluate for trustworthiness when they are reading multiple texts to learn about controversial issues (Brem, Russell, & Weems, 2001; Britt & Aglinskas, 2002; Kiili, Laurinen, & Marttunen, 2008; Maggioni & Fox, 2009; Nokes, Dole, & Hacker, 2007; Stahl, Hynd, Britton, McNish, & Bosquet, 1996; Walraven, Brand-Gruwel, & Boshuizen, 2009; Wineburg, 1991). Research suggests that such lack of source feature consideration to establish trustworthiness has consequences for effectiveness and efficiency when acquiring new knowledge. In Kiili et al. (2008), for example, the majority of comments secondary school students produced while evaluating information resources concerned content relevance, with very few instances reflecting credibility assessments based on the available source feature information. Kiili et al. (2008) characterized some students as “uncritical readers” due to their source feature inattention, a designation evidenced by a greater proportion of time spent reading information from less reliable texts. These findings correspond with those of Wineburg (1991) and Maggioni and Fox (2009), who both documented minimal verbal protocol evidence that students use source features when they are reading to learn from multiple history texts. Both Britt and Aglinskas (2002) and Stahl et al. (1996) analyzed the notes students produced when reading multiple history texts. Similar to the studies cited above, they found that students rarely mentioned source information in the notes they generated, which was related to poor performance on source knowledge questions after reading. Finally, students have been found to use fictional information retrieved from novels and movies as facts to support their arguments, which can be viewed as additional evidence of poor source evaluation (Britt & Aglinskas, 2002; Seixas, 1994).

Scholars interested in digital media technologies, especially with respect to the Internet, have given the issue of trustworthiness of sources and information particular attention in the last decade. One reason is that professional gatekeeping is essentially lacking on the Web, with posted texts seldom having explicit review policies or undergoing the quality control most paper-based publications do. Thus, judgments of trustworthiness are more often left with the individual learners or information consumers themselves. The challenges increase because the author and other source feature information that is typically available in printed texts is often masked, unavailable, or, at best, hard to interpret on many Web sites (Britt & Gabrys, 2000; Flanagin & Metzger, 2008). Given this backdrop, it is hardly surprising that students “rarely to occasionally” attempt to verify the credibility of information obtained via the Internet (Metzger, Flanagin, & Zwarun, 2003). Sanchez, Wiley, and Goldman (2006) provided evidence that—even within a sample of college undergraduates—understandings of the methods used to evaluate the trustworthiness of Web sites were fragile, with considerable student problems in justifying evaluations of trustworthiness. Moreover, readers often draw on superficial features, seldom judging information credibility based on author credentials (Metzger et al., 2003). For example, when judging the trustworthiness of Web-based health information, university students often use superficial or inadequate criteria, such as whether documents include information-redundant illustrations (Wittwer, Bromme, & Jucks, 2004), their preconceptions or first impressions of a Web site’s layout (Stadtler & Bromme, 2007), or even the picture of the site owner (Eysenbach, 2008). Such problems are even more salient with younger students, who have been found to rely heavily on surface credibility markers (e.g., more authors, presence of numerical values) and to seldom move beyond a selected site to look for corroborating information (Brem et al., 2001). A particular challenge noted by Strømsø et al. (2013) seems to be that students may link content information to sources cited in a text without embedding this source information within information about the source of the text itself, thereby decontextualizing the content information and making it harder to evaluate. It may be essential to note, for example, whether a particular scientist making a particular claim is cited in a scientific journal or in a tabloid.

3.2 Benefits of Source Information

As problematic and challenging as source evaluation may be for students across educational levels, several correlational studies have shown students’ consideration of trustworthiness based on source features to be linked to their learning about controversial issues from diverse texts (Anmarkrud, Bråten, & Strømsø, 2014; Barzilai & Eshet-Alkalai, 2015; Barzilai, Tzadok, & Eshet-Alkalai, 2015; Bråten et al., 2009; Goldman et al., 2012; Strømsø et al., 2010; Wiley et al., 2009). For example, Bråten et al. (2009) demonstrated a relationship between students’ judgments of the trustworthiness of texts on global warming based on their respective source features and their learning from the texts, both of which were assessed after reading when students did not have access to the texts. In that study, results indicated that trust in reliable sources, indeed, seems to matter, even if learners are not necessarily able to justify their trust in terms of relevant source features, such as document type and publisher. If they are, such justifications may represent a level of sourcing skills capable of boosting performance even further, however (Bråten et al., 2009). Recent studies using think-aloud methodologies also demonstrate that strategies focused on differentiating more versus less useful texts during reading and using trustworthiness criteria when doing so relate to better learning. For example, Anmarkrud et al. (2014) and Barzilai et al. (2015), who also had students read multiple texts about controversial socio-scientific issues (viz. cell phone radiation and desalination), demonstrated relationships between attention to and evaluation of information sources during reading and argumentation sophistication and source use in post-reading essays (see also, Barzilai & Eshet-Alkalai, 2015, for a recent documentation of the linkage between students’ sourcing skills and their written argumentation). In the same vein, Goldman et al. (2012), who contrasted the kinds of processing that better and poorer learners displayed when reading more and less reliable texts about a complex scientific issue, found that better learners were more likely to evaluate the source credibility of texts compared with poorer learners. Related to this finding, poorer learners spent more time reading unreliable texts and were more likely to include erroneous concepts in post-reading essays. In brief, the correlational research suggests that to successfully construct complete, accurate mental representations of controversial issues that can be applied in novel situations, be involved in argumentative reasoning, and form the basis of important behavioral decisions, students must apply more sophisticated source evaluation strategies in efforts to selectively process higher-quality information. However, it is clearly the case that intervention work is needed to draw stronger conclusions concerning causal relationships between these variables. Before turning to interventions, we will discuss the roles of individual and textual factors in source evaluation as well as students’ difficulties distinguishing between content relevance and source trustworthiness.

3.3 Individual and Textual Factors in Students’ Source Evaluation

Although much remains to be known about individual factors associated with source evaluation, there is currently evidence to suggest that students’ working memory capacity (Braasch, Bråten, Strømsø, & Anmarkrud, 2014) and their prior knowledge about the issue (Braasch, Bråten, Strømsø, et al., 2014; Bråten, Strømsø, & Salmerón, 2011; Rouet, Britt, Mason, & Perfetti, 1996; Strømsø et al., 2010) are positively correlated with critical evaluation of sources when reading about controversial issues. Likewise, students’ implicit theories of intelligence (i.e., the degree to which they consider their own intelligence to be malleable rather than fixed; Dweck, 1999) have recently been linked to source evaluation. That is, students considering intelligence to be a malleable, increasable quality were also more likely to discriminate between more and less useful documents about a controversial issue based on trustworthiness assessments (Braasch, Bråten, Strømsø et al., 2014). Other individual difference variables that have been linked to students’ source evaluation include their beliefs about knowledge and knowing concerning a particular domain or issue, for example, beliefs regarding the certainty or simplicity of knowledge or the justification of knowing (Barzilai et al., 2015; Barzilai & Eshet-Alkalai, 2015; Bråten, Ferguson, Strømsø, & Anmarkrud, 2014; Kammerer, Amann, & Gerjets, 2015; Kammerer, Bråten, Gerjets, & Strømsø, 2013; Strømsø, Bråten, & Britt, 2011). In this vein, Strømsø et al. (2011) suggested that students believing knowledge about an issue to be complex may be less likely to rely on information from sources that often simplify rather than elaborate upon complex issues, such as a newspaper. Additionally, these authors found that the belief that justification for knowing should refer to reasoning, scientific inquiry, and the evaluation and integration of multiple sources was linked to students’ trust in research-based sources and attention to a variety of source features when evaluating such sources on the issue of global warming. Finally, there is some evidence to suggest that students’ prior attitudes and motivations play a role in situations that require evaluation of source information (Andreassen & Bråten, 2013; Braasch, Bråten, Britt, Steffens, & Strømsø, 2014; Strømsø et al., 2010; van Strien, Brand-Gruwel, & Boshuizen, 2014). For example, Braasch, Bråten, Britt et al. (2014) found that when reading inaccurate arguments about controversial health-related issues, students remembered the sources of those arguments better, the stronger their prior attitudes about the issues. Presumably, when textual arguments are not sufficient to support or strengthen prior attitudes because the arguments are inaccurate, readers holding stronger attitudes about the issues may turn to source information (e.g., a reliable author, a well-respected publication venue) to bolster their prior attitudes. Regarding motivation, Strømsø et al. (2010) found that students’ topic interest was positively related to their memory for source information when reading multiple texts about global warming, and, more recently, Andreassen and Bråten (2013) showed that learners’ source evaluation self-efficacy (i.e., their perceived capability to evaluate the trustworthiness of sources) predicted their reliance on relevant source features related to both the product and the producer of Web sites when evaluating their trustworthiness.

However, not only individual but also textual factors have been shown to play a role in source evaluation. Braasch, Rouet, Vibert, and Britt (2012) introduced the idea that learners’ attention to source information (i.e., to “who said what”) might increase when different sources provide discrepant accounts. More specifically, these authors proposed that when different sources make conflicting claims about a controversial situation or issue, one mechanism for resolving the resulting break in situational coherence (Graesser, Singer, & Trabasso, 1994) and constructing an integrated mental representation may be to link discrepant content information to the respective sources. Referring to this assumption as the discrepancy-induced source comprehension or D-ISC assumption, Braasch et al. (2012) provided preliminary evidence in two experiments where undergraduate students read brief news reports containing two claims that were either conflicting or consistent. In accordance with the D-ISC assumption, online and offline data, respectively, indicated that conflicting claims promoted deeper processing of and better memory for the sources of the claims, as compared to consistent claims. Recently, de Pereyra, Belkadi, Marbach, and Rouet (2014) showed that similar effects can also be observed with lower-secondary students, but with stronger effects obtained for undergraduates than for seventh- and ninth-graders. Braasch, McCabe, and Daniel (2016) corroborated these findings by demonstrating that when different sources provided semantically congruent arguments, readers were less attentive to source information relative to a control condition involving distinct arguments.

Of note is that in the Braasch et al. (2012) and the de Pereyra, Belkadi et al. (2014) studies, the conflicting claims and their respective sources were embedded in a single text (i.e., a brief news report). However, the D-ISC assumption has also received empirical support in reading contexts where conflicting claims about the same issue are presented in multiple distinct texts (Kammerer & Gerjets, 2014; Stadtler, Scharrer, Skodzik, & Bromme, 2014; Strømsø & Bråten, 2014; Strømsø et al., 2013). For example, Kammerer and Gerjets (2014) found that conflicts between the claims of an institutional Web page and several other, partly commercial, Web pages on a controversial fitness-related issue made students allocate more attention to the source of the institutional Web page during reading and include more source citations in their written summaries. In the same vein, Stadtler, Scharrer, et al. (2014) found that when the existence of conflicting claims across multiple texts on a controversial health issue was explicitly signaled through rhetorical means (e.g., by starting a text with the following phrase: “Contrary to what some health professionals argue, …”), students included more source citations when generating essay responses on the issue than when conflicts were not explicitly signaled.

It seems fair to say that, so far, less is known about students’ attention to and use of source information when reading a single text than when reading multiple texts. For example, learners might be unlikely to separate source and content when they read only a single text on a topic or a single perspective without controversy (Braasch et al., 2016; Bråten, Strømsø, & Andreassen, 2016; Britt et al., 2013). Even when a controversy is discussed in a single text, however, there may be less attention to source information than when a controversy is discussed across multiple texts. Admittedly, Braasch et al. (2012) and de Pereyra, Belkadi et al. (2014) found that discrepant views on an issue presented within a single text increased attention to and use of source information relative to a condition where consistent views on the same issue were presented. Other work (de Pereyra, Britt, Braasch, & Rouet, 2014; Stadtler, Scharrer, Brummernhenrich, & Bromme, 2013; Steffens, Britt, Braasch, Strømsø, & Bråten, 2014), however, suggests that source information for inconsistencies within a single text is mostly disregarded. For example, Steffens et al. (2014) found that students’ memory for source information when reading single texts was poor, with no evidence that source information was recalled better when inconsistent information was presented within the texts. Consistent with findings reported by Stadtler et al. (2013), one reason for this may be that students are less likely to attend to and remember conflicting views and controversies when they are discussed within single texts than when they are discussed across texts.

In brief, whether conflicting information is presented in a single text or in multiple texts may impact the extent to which students focus their attention on source information in addition to content. Likewise, whether conflicting information presented in multiple texts is explicitly highlighted through cross-referencing or not seems to matter in this regard. Recently, researchers interested in source evaluation in students’ critical reading and learning have also started to address how individual factors may interact with textual factors in both single- and multiple-text contexts (Barzilai & Eshet-Alkalai, 2015; Bråten, Salmerón, & Strømsø, 2016; Maier & Richter, 2013). Maier and Richter (2013) presented findings consistent with the idea that a discrepancy between students’ prior beliefs regarding controversial issues and textual information may trigger attention to the source of a text. Thus, when students read two texts conflicting and two texts consistent with their prior beliefs on the topics of global warming or vaccination, these authors found that students displayed better source memory for texts presenting arguments in conflict with their prior beliefs. For example, students believing global warming to be caused by human activities displayed better source memory when reading that it has natural causes than when reading that it is caused by human activities. Building on de Pereyra, Britt et al.’s (2014) extension of the D-ISC model to situations involving discrepancies between learners’ prior knowledge and textual information, Bråten, Salmerón et al. (2016), in a single-text study, also showed that students’ memory for source information may increase with the discrepancy between textual claims and prior beliefs. This suggests that when readers judge content information to be implausible in light of their prior beliefs about the topic, they may be more likely to seek support from available information about the source to make sense of the content. Finally, in a multiple-text study, Barzilai and Eshet-Alkalai (2015) found that conflicts between sources improved attention to and memory for “who said what” only among readers with higher levels of multiplist and evaluativist epistemic thinking (Kuhn, 2001).

Despite the progress that has been made in this area of research, there is a great need to further investigate individual and textual factors contributing to source evaluation when students read about controversial socio-scientific issues (Braasch, de Pereyra, & Bråten, 2015; Bråten et al., in press). Among the potentially contributing individual factors in need of further investigation are general cognitive competencies such as cognitive reflection (Frederick, 2005; Kahneman, 2011) and critical thinking (Bonney & Sternberg, 2011; Halpern, 2007), as well as students’ general and domain-specific knowledge of relevant source features (Rouet, Ros, de Pereyra, Macedo-Rouet, & Salmerón, 2013). First, it is important to clarify to what extent critical source evaluation is an aspect of more general cognitive competencies. Second, the relationship between declarative knowledge of relevant source features and sourcing activities during reading (i.e., procedural source knowledge) needs to be clarified. Likewise, there are several additional textual factors that need to be further researched. For example, source salience, that is, how detailed and elaborated the descriptions of the sources are and where they are located, may impact the extent to which students focus their attention on source information (Britt et al., 2013; Strømsø et al., 2013). In addition, because of the consequentiality of receiving unreliable information, texts that focus on unsettled and controversial issues related to people’s health or safety (i.e., risk issues) may make questions of trust in sources particularly pertinent (Jungermann, Pfister, & Fischer, 1996; Kolstø, 2001). Finally, although some recent evidence suggests that characteristics of the reader and characteristics of the text(s) may interact to facilitate or constrain attention to and memory for source information, the issue of reader–text interaction is wide open for further research.

3.4 Distinctiveness of Content Relevance and Source Trustworthiness When Dealing with Controversial Issues

Clarifying students’ judgments of content relevance in relation to their judgments of source trustworthiness is a vital issue with theoretical as well as practical implications. As we previously stated, students’ text evaluations more typically concern content relevance than source trustworthiness (Braasch, Bråten, Strømsø, Anmarkrud, & Ferguson, 2013; Kiili et al., 2008). For example, when tasked to select and use information resources for a particular purpose, they are likely to base their selection and use on the relevance of the content (i.e., the perceived instrumental value of the content for their purpose; McCrudden & Schraw, 2007) and tend to disregard the credibility of the source (e.g., the expertise of the author; Pornpitakpan, 2004). A pertinent question, however, is to what extent content relevance and source trustworthiness are psychologically distinct constructs for student readers. If they are psychologically blurred, some of students’ difficulties with source feature evaluations of trustworthiness may be due to their difficulties in distinguishing such processing from evaluations based on the relevance of the content. This may lead them to focus only on whether a text deals with matters connected to the issue they are inquiring into or contains key words matching their search terms. Researchers may take for granted that relevance and trustworthiness are psychologically distinct categories because they are orthogonal in logical terms, meaning that something can be relevant but not trustworthy and vice versa, a view also supported by studies of expert readers (Afflerbach & Cho, 2009; Pressley & Afflerbach, 1995). Still, student readers may come to overlook source information because they do not clearly realize that evaluating trustworthiness based on source features is a process above and beyond evaluating the relevance of the content information (Macedo-Rouet, Braasch, Britt, & Rouet, 2013).

Recently, McCrudden, Stenseth, Bråten, and Strømsø (2016) investigated this issue by asking secondary school students to select the most useful texts for a given purpose among texts varying with respect to both content relevance and author expertise (as an indication of source trustworthiness). In this study, participants were presented with texts concerning two different controversial issues varying in familiarity, climate change (more familiar) and nuclear power (less familiar), and selected the texts that they deemed most useful for giving a presentation to their class about each of those issues. In brief, the results indicated that the extent to which students distinguished between and took the two constructs into consideration when selecting texts varied with the familiarity of the issue. Thus, content relevance was equally valued and highly salient to students for both issues such that they clearly selected more-relevant than less-relevant content. However, the same students distinguished much less between high and low author expertise when they selected texts for the more familiar issue than when they selected texts for the less familiar topic, with the salience of content relevance seemingly overshadowing the salience of author expertise in the former case.

From an educational perspective, it is important to understand to what extent content relevance and source trustworthiness are psychologically distinct for students when reading to learn about controversial issues, so that it is possible to develop effective interventions that help them select and use relevant information from trustworthy sources. Thus, making students aware of how they actually evaluate the usefulness of textual information resources across issues may be a first step toward helping them strike an adaptive balance between content relevance and source trustworthiness in this evaluation process. Much further research is needed to understand the extent to which source trustworthiness is viewed as distinct from content relevance across diverse issues for students at different educational levels, however.

3.5 Source Evaluation Interventions

Because research suggests a general lack of consideration of the importance of available source features among students, and because source feature evaluation seems paramount when reading to learn about controversial issues using multiple texts, students appear to require interventions targeting the acquisition of source feature evaluation strategies. Accordingly, some researchers have developed interventions for elementary, secondary, or post-secondary students, as well as for adults out of school, to improve their consideration of source features when working with multiple texts (Braasch et al., 2013; Britt & Aglinskas, 2002; De La Paz & Felton, 2010; Graesser et al., 2007; Kammerer et al., 2015; Macedo-Rouet et al., 2013; Mason, Junyent, & Tornatora, 2014; Nokes, 2014; Nokes et al., 2007; Reisman, 2012; Sanchez et al., 2006; Stadtler, Scharrer, Macedo-Rouet, Rouet, & Bromme, 2016; Walraven, Brand-Gruwel, & Boshuizen, 2010, 2013; Wiley et al., 2009).

For example, in a much-cited study, Britt and Aglinskas (2002) developed a computer-based tutorial to promote students’ attention to source features of multiple historical texts. Students were first provided with direct instruction on three strategies (sourcing, contextualization, and corroboration). During reading, note cards appeared at the bottom of each screen, requiring students to provide entries about source features of texts (author, type, and date of publication) as well as about content information. Results indicated that students who received the intervention cited more sources in their notes, answered more source knowledge questions correctly on a post-reading transfer test, and cited more sources in their post-reading essays than did students in a control group.

In an example from the domain of science, Wiley et al. (2009) instituted the SEEK intervention, which focused on ways to instruct students on four important facets of texts: the Source of the information in each text, the nature of the Evidence that was provided in each text, the fit of a text’s evidence into the Explanation of the phenomenon, and the fit of the new information within a text with prior Knowledge. The students in the treatment group were first provided with declarative information and received instruction regarding ways to evaluate multiple texts with respect to the four components of SEEK. They then read multiple texts that varied in reliability and answered questions indicative of the criteria in the declarative information. After reading, they rank-ordered the texts based on their interpretations of the texts’ reliability, justified their rank-orders, and compared their rankings with those generated by experts using the same text set. During an application task using a novel set of multiple texts, SEEK students were better at discriminating the reliability of the texts, included more correct and fewer incorrect causes in post-reading essays, and displayed greater pre–post learning gains relative to controls.
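As a purely illustrative sketch (in Python), the four SEEK facets can be thought of as a simple evaluation checklist that is then used to rank-order texts by judged reliability; the scoring scheme and example texts below are hypothetical and are not the materials or procedures used by Wiley et al. (2009):

    from dataclasses import dataclass

    @dataclass
    class SeekRating:
        title: str
        source: int       # Is the Source of the information credible? (0-2)
        evidence: int     # What is the nature of the Evidence provided? (0-2)
        explanation: int  # Does the evidence fit the Explanation of the phenomenon? (0-2)
        knowledge: int    # Does the new information fit with prior Knowledge? (0-2)

        @property
        def total(self) -> int:
            return self.source + self.evidence + self.explanation + self.knowledge

    ratings = [
        SeekRating("Research institute report", 2, 2, 2, 2),
        SeekRating("Anonymous blog post", 0, 1, 1, 1),
    ]

    # Rank the texts from most to least reliable, as in the post-reading ranking task.
    for r in sorted(ratings, key=lambda r: r.total, reverse=True):
        print(r.title, r.total)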

Finally, in one of the very few interventions designed to promote secondary school students’ implementation of source evaluation strategies in multiple science texts inquiry contexts (see also, Stadtler, Scharrer, Macedo-Rouet, et al., 2016), Braasch et al. (2013) developed and implemented an intervention harnessing activities that typify science classrooms. At the same time, they extended prior work by acknowledging and targeting inappropriate evaluation strategies that secondary school students frequently employ when they interact with multiple scientific texts, building on a contrasting-cases approach recently substantiated in other instructional areas (Gadgil, Nokes-Malach, & Chi, 2012; Rittle-Johnson & Star, 2009). Two hypothetical students’ text evaluation strategy protocols were designed: One featured more sophisticated strategies focusing on source features, more commonly enacted by experts and better college students, and a second featured less sophisticated strategies focusing on the relevance of content information (i.e., key words), more commonly enacted by secondary school students. A series of classroom-based activities required students to compare and contrast the two protocols to decide which were the best strategies when analyzing multiple texts on a controversial socio-scientific issue and why. Findings demonstrated that students who previously participated in the intervention activities included more scientific concepts from more reliable texts when writing essays based on more or less reliable texts on a different issue, displayed more expert-like rankings of the usefulness of the set of multiple texts, and offered more principled justifications for their rankings based on source feature evaluations of trustworthiness compared to students who instead received typical classroom instruction. Although promising, the Braasch et al. (2013) study can be considered limited in that it was a very brief intervention (lasting only 60 min), the intervention was implemented by the researchers rather than by the regular class teachers, students’ learning from texts was assessed quite narrowly (by their inclusion of scientific concepts in their essays), and no follow-up data demonstrating long-term effects were produced.

4 Future Directions

Both inside and outside of classroom contexts, students at different educational levels are increasingly confronted with texts on unsettled and controversial issues that vary with respect to reliability. Further advancement of our understanding of critical source feature evaluation and its relation to learning processes and learning outcomes is therefore needed, providing a basis for theory-based educational innovations in the area. In the following, we briefly discuss some future goals for research on students’ source evaluation skills that are likely to have important theoretical as well as educational implications.

A first goal is to further investigate individual and textual factors contributing to students’ source evaluation when they read to inform themselves about controversial socio-scientific issues. So far, we have only limited knowledge of how students’ cognitions, beliefs, attitudes, and motivations may contribute to their source evaluation in scientific text contexts, and even less is known about how such individual difference variables may be interrelated. One way to fill this knowledge gap is therefore to include such variables in the same study to examine how they separately and in concert may contribute to students’ source evaluation practices. In addition, aspects of the textual materials need to be further investigated to better understand how textual factors may contribute to source evaluation differences. Conflicting views and the sources that convey them may be presented in multiple texts with cross-references to the sources of the other texts, in multiple texts without such cross-references, or in one single text. The extent to which such textual variation may influence students’ source evaluation practices is currently not well understood, however. Moreover, further investigating interaction effects of individual difference variables with textual factors on students’ source evaluation (e.g., whether effects of individual differences in prior topic knowledge or epistemic beliefs may be moderated by explicit cross-referencing in multiple conflicting texts) would, indeed, traverse new empirical territory and contribute to our theoretical understanding of source evaluation in student readers. In such experimental work, dependent measures might be students’ ability to identify and understand the conflicting perspectives as well as their judgment of the trustworthiness of each perspective, spontaneous attribution of trustworthiness to features of the sources, cued recall of features of each source, and, possibly, justifications for intended behavioral change based on source feature evaluations of the sources.

A second, related goal is to further examine students’ understanding of the distinction between content relevance and source trustworthiness, as well as their ability to flexibly balance those criteria when selecting and using information resources on controversial issues in inquiry contexts. For this research purpose, mixed-methods approaches (Creswell & Plano Clark, 2007) combining quantitative and qualitative data sources seem suitable. Concerning quantitative facets of the research, students may be tasked to select and use information resources varying with respect to both content relevance and source trustworthiness to answer inquiry questions about different controversial issues, with their selection behavior and their construction of evidence-based arguments about each issue analyzed to indicate the extent to which they base their judgments on content relevance, source trustworthiness, or perhaps both. In this design, task instructions may be manipulated to see whether they can affect students’ orientations toward the content and sources of competing knowledge claims, that is, toward “what is true” and “whom to believe,” respectively (Stadtler & Bromme, 2014). Moreover, follow-up interviews with purposefully selected individuals who differ with respect to selection and use of information resources (e.g., base their selection and use primarily on content relevance vs. on source trustworthiness) may provide qualitative data about their understanding of the distinction between content relevance and source trustworthiness as well as their underlying reasoning when considering or ignoring those criteria. By varying task instructions, the insights derived from this line of research may facilitate the construction of materials for use in theory-based source-evaluation interventions.

A third goal is to further address the question of how students’ acquisition and application of sophisticated source evaluation strategies can be effectively and efficiently promoted. Although quite a few studies indicate that students’ source evaluation strategies can be improved through instruction (e.g., Braasch et al., 2013; Britt & Aglinskas, 2002; Wiley et al., 2009), longer-term classroom-based intervention research targeting source evaluation when students work with information resources on controversial socio-scientific issues is conspicuous by its absence. Moreover, while prior work has mainly consisted of researcher-led interventions, it seems essential to investigate how efforts to promote source evaluation may be incorporated into regular subject-matter instruction and conducted by classroom teachers by means of professional support, highlighting the need to further develop teachers’ professional competencies and to assess implementation quality through the collection of process data (e.g., trace and observation data). Following Braasch et al. (2013), such intervention work may profitably utilize a contrasting-cases approach, thus providing an instructional context that acknowledges students’ default, yet inappropriate, evaluation strategies drawing primarily on content relevance in close juxtaposition with sophisticated source evaluation strategies taking relevant source features into consideration. Key design features of such an approach may include illustrations of both inappropriate and appropriate source evaluation strategies provided by hypothetical peers reading about different controversial socio-scientific issues in multiple texts, solicitations for students’ explanations of the (lack of) importance of each identified strategy for multiple-text inquiry, and solicitations for participation in dyadic and instructor-led, whole-class discussions concerning the strategies. In this way, contrasting cases can be embedded within several tasks that typify classroom-based instructional practices and framed in pedagogically meaningful ways. Moreover, competencies acquired via the contrasting-cases approach should be practiced in “real-world” contexts of retrieving, evaluating, and comprehending diverse Internet documents for inquiry purposes. It seems important that intervention effects are evaluated in terms of application tasks requiring that students transfer source evaluation strategies to novel situations where they read to learn about other issues or have to make well-grounded behavioral decisions. In particular, such application tasks should assess students’ ability to build an integrated understanding of a controversial issue based on the most trustworthy information. Finally, a lack of follow-up data assessing long-term effects of source evaluation interventions is a serious limitation of previous work that future research needs to address.

5 Conclusion

That students selecting and using textual resources concerning controversial issues more often than not tend to disregard source information and pay attention only to the content is especially problematic in the current reading context, where the abundance of easily accessible information of dubious quality requires, more than ever, that students are capable of critically evaluating the sources they come across. Unfortunately, this also implies that many individuals now enter higher education and the workplace lacking critical reading and learning skills (see also, OECD, 2011). In the current chapter, we have addressed this broad educational issue and called for further research that will not only provide basic scientific knowledge but also generate guidelines for essential, evidence-based pedagogical innovations. Systematic research on critical reading and learning with a focus on source evaluation extends ongoing mainstream international research on student reading and learning. Such extension, however, is necessary to advance our understanding of the kind of learning and literacy required in the twenty-first century and to create innovations that help students become critical readers and learners rather than passive consumers of the diverse information resources they encounter.