1 Introduction

There is widespread agreement that science education should produce scientifically literate people who can relate and apply science to their lived experiences (Feinstein, 2011; National Research Council, 2012)—for example, by making decisions about socioscientific issues (SSIs). SSIs are personally and socially meaningful science-related issues (Rudolph & Horibe, 2016; Zeidler et al., 2005). Making informed decisions about SSIs requires people to evaluate the trustworthiness of associated scientific knowledge claims; otherwise, they are vulnerable to being persuaded to take positions that are not in their own best interest (or in the interest of their communities). However, SSI-related knowledge claims are particularly challenging to evaluate because they are often uncertain, owing to the inherent uncertainty of scientific claims, and conflicting, owing to different stakeholders marshalling evidence to support different viewpoints (Kolstø, 2001).

There are many examples of people making decisions about SSIs without critically evaluating the associated knowledge claims. For example, in deciding not to vaccinate their children, parents may be convinced by personal testimonials rather than critically evaluating those testimonials in light of scientific evidence refuting a link between vaccines and autism. To support informed decision-making about SSIs, it is therefore crucial to describe, in detail, the practice of critically evaluating uncertain and conflicting scientific claims. This practice has been considered within the larger context of research on epistemic cognition (e.g., Chinn et al., 2011; Lombardi et al., 2016; Sinatra et al., 2014).

Within the tradition of epistemic cognition, Duncan et al. (2018) recently proposed Grasp of Evidence (GOE) as a theoretical framework to describe and support understandings of evidentiary reasoning necessary for engaging with science as a ‘competent outsider’ (Feinstein, 2011). Duncan et al. argue that, although science education standards, such as the US Next Generation Science Standards (NGSS Lead States, 2013), highlight the importance of evidence, they do not explicate ‘the epistemic features and roles of evidence’ (p. 909) necessary for sophisticated and complex engagement with evidence. The GOE framework seeks to differentiate among different forms of evidence and among different ways of engaging with evidence. GOE is particularly relevant for our interest in SSIs because uncertain and conflicting claims require ‘evaluation in a framework of alternatives and evidence’ (Kuhn et al., 2017, p. 233).

In this chapter, we take up Duncan et al.’s (2018) call to ‘explore the utility of [the GOE] framework as an analytic tool’ (p. 933), using this framework as the lens for examining undergraduate students’ critical evaluation of SSI-related knowledge claims. In particular, we ask: How do students’ responses to socioscientific scenarios reveal their GOE? By answering this question, we explore use of the GOE to examine evaluation of SSI-related scientific knowledge claims and use our data to provide empirical illustration of the GOE framework.

2 Grasp of Evidence: Laypeople’s Understanding of Evidence

Duncan et al. (2018) argue that ‘laypeople need to grasp two distinct, yet interrelated, aspects of evidence’ (p. 910): experts’ use of evidence (i.e., how scientific claims are generated) and laypeople’s use of evidence (i.e., how non-experts can use evidence to engage with science). Different ways of engaging with evidence are represented as five dimensions of the GOE framework. Four dimensions reflect experts’ evidentiary practices: analysis (identifying and comprehending components of scientific studies), evaluation (examining the quality of evidence), interpretation (examining the strength of evidence), and integration (identifying and weighing relevant evidence). The fifth dimension reflects laypeople’s use of second-hand reports of evidence (Sharon & Baram-Tsabari, 2020).

Within each of these dimensions, Duncan et al. (2018) use the AIR model of epistemic cognition (Chinn et al., 2014) to specify the epistemic components of each practice. Epistemic Aims and Values (EAs) ‘are the kinds of epistemic products’ people ‘set to achieve (aims)… and the importance of those products (values)’; Epistemic Ideals (EIs) ‘are the criteria used to evaluate whether epistemic aims have been achieved’; and Reliable Epistemic Processes (REPs) ‘are the diverse processes’ used ‘to achieve epistemic aims’ (p. 914).

Duncan et al. (2018) provide the EA, along with examples of EIs and REPs, for each dimension of the GOE framework; we referred to these to inform our analysis of students’ responses to the socioscientific scenarios. For example, the EA of the evidence evaluation dimension is ‘determining if evidence is of high quality and whether conclusions can be trusted’. An EI example is ‘Conclusiveness (i.e., ruling out confounds and alternative explanations for the findings)’, and an REP example is ‘Evaluating the appropriateness of study design (e.g., appropriate samples and comparisons)’ (p. 915).

3 Method

Data collection for this study took place in the context of a semester-long interdisciplinary course at a large public university in the western United States. The course aimed to help undergraduate students apply scientific-style critical thinking to make decisions about scientific and non-scientific issues by introducing concepts and principles that scientists have developed to generate and evaluate knowledge claims.

3.1 Data Sources

The primary data source for this study was transcripts of interviews conducted with 15 of the 95 students enrolled in the course. These students were selected by stratified random sampling, stratifying on academic year and major, from the 72 students who agreed to be interviewed. Each student participated in 3–5 one-hour video-recorded interviews, held once every 2–3 weeks beginning after the first third of the semester. In each interview, participants were prompted to respond to questions about scenarios designed to mimic everyday decision-making and to share the reasoning behind their responses.
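To make the selection procedure concrete, the following is a minimal sketch of stratified random sampling, not the authors’ actual procedure or code; the year labels, majors, and pool composition are hypothetical. Draws are allocated across (academic year, major) strata in proportion to each stratum’s size:

```python
import random
from collections import defaultdict

def stratified_sample(students, k, seed=None):
    """Draw roughly k students, allocating draws across strata defined
    by (academic year, major) in proportion to each stratum's size."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for s in students:
        strata[(s["year"], s["major"])].append(s)
    total = len(students)
    sample = []
    for members in strata.values():
        # At least one draw per stratum; because of rounding, the final
        # sample size may differ slightly from k.
        n_draw = max(1, round(k * len(members) / total))
        sample.extend(rng.sample(members, min(n_draw, len(members))))
    return sample

# Hypothetical pool of the 72 consenting students; select about 15.
pool = [{"id": i,
         "year": random.choice(["1st", "2nd", "3rd", "4th"]),
         "major": random.choice(["Biology", "History", "Engineering"])}
        for i in range(72)]
interviewees = stratified_sample(pool, 15, seed=42)
print(f"{len(interviewees)} students selected")
```

Proportional allocation of this kind helps ensure that small strata (e.g., a rarely represented major) are not excluded from the sample by chance.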

Across the interviews, students responded to a total of 36 scenarios. Initially, we selected 10 scenarios (with a total of 96 responses) that (1) prompted students to evaluate claims and make decisions and (2) effectively elicited the reasoning underlying students’ evaluations and decision-making.

As described below, in this chapter we focus on two of these scenarios: CFC and Chocolate (see Appendix). The CFC scenario asks students to discuss how a legislator would go about deciding whether to ban a class of chemicals, chlorofluorocarbons (CFCs). Students consider arguments provided by scientists (for a ban) and industry representatives (against a ban). The Chocolate scenario asks students to decide whether they would change their dietary habits based on a news report of a study claiming that chocolate causes weight loss.

3.2 Data Analysis

In order to understand students’ critical evaluation of the socioscientific scenarios, we conducted an initial grounded theory analysis of students’ transcripts (Charmaz & Belgrave, 2012), along with a concurrent review of relevant literature, using two iterative steps. First, we open-coded idea units for features that appeared relevant to the cautious and informed evaluation of scientific knowledge claims, incorporating these features into an evolving definition of a construct we called ‘epistemic critique’. Second, we connected these features to relevant concepts from the literature. Towards the end of our iterative process, we recognized that Duncan et al.’s (2018) GOE framework captured much of what we sought to describe using ‘epistemic critique’, prompting our current investigation.

The two frameworks seemed to capture many of the same aspects of people’s evaluation of scientific knowledge claims; however, there was no one-to-one correspondence between the GOE framework and our features of epistemic critique. Therefore, we conducted a new analysis of the CFC and Chocolate scenarios using the GOE framework in order to explore its utility. We chose these two scenarios for several reasons: (1) in our initial, iterative analyses, we had coded these two scenarios using the AIR framework (Chinn et al., 2014); (2) compared to the other eight scenarios, they more explicitly elicited students’ understandings of evidence; and (3) features identified in responses to these scenarios corresponded to a range of components of the GOE framework.

We conducted a content analysis of responses to the CFC and Chocolate scenarios (13 responses to each) using Duncan et al.’s (2018) GOE framework: five evidentiary practices and three epistemic components, along with specific examples of the epistemic components within each of the five dimensions.

The two authors independently coded idea units in the interview transcripts using the GOE framework and then met to develop a shared understanding of the framework in relation to our data and to reach consensus on the applied codes. This process was aided by previous discussions of the interview transcripts as part of our initial analyses.

During the coding process, we recognized that the GOE framework did not fully capture students’ engagement with the scenarios. In particular, epistemic concepts—those required for evaluating the trustworthiness of knowledge claims—were not explicitly included, yet they seemed important for describing how students evaluated SSI-related scientific knowledge claims. Thus, we conducted an additional grounded theory analysis. As in our initial analysis, we articulated epistemic concepts through an iterative process of coding and consultation of relevant literature.

4 Findings

In our data, we identified evidence of students’ understandings of (1) the two types of evidentiary practices (experts’ and laypeople’s) and (2) three of the four dimensions of experts’ use of evidence. We further identified epistemic concepts that seemed to underlie students’ GOE and to account for meaningful differences in their evaluations of SSI-related scientific knowledge claims.

4.1 Understanding of Experts’ and Laypeople’s Use of Evidence

The GOE framework helps to illuminate whether students drew on understandings of experts’ or of laypeople’s evidentiary practices. We illustrate this difference with Evan’s and Tyler’s responses to the CFC scenario. While both students considered the scientists’ claims to be more trustworthy than the aerosol companies’ claims, they focused on different types of evidentiary practices. Evan attended to experts’ evidentiary practice:

If a large portion of the [scientific] community had independent studies, like if a lot of studies found the same result, I would be more inclined to believe it. Then it’s much harder to deny or just to step aside and say we don’t know yet.

In this excerpt, Evan exhibited understanding of EI ‘Replicated evidence’ (Duncan et al., 2018, p. 916) from the evidence integration dimension by indicating that he would tend to trust claims that draw on replicated evidence from multiple studies.

In contrast, Tyler attended to laypeople’s evidentiary practice:

The aerosol industry, like they’d be so biased on like … I would definitely like lean towards the side of the scientists because like they’re more experts; they have like a better opinion. … This [what causes Ozone depletion] was never like a polarized issue. They [scientists] basically like discovered it. It’s not like they were polarized before and like were trying to figure out more about it.

Tyler exhibited understanding of the EI ‘Source trustworthiness (degree of expertise, integrity, lack of bias, etc.)’ (Duncan et al., 2018, p. 916). In this excerpt, he considered both scientists’ lack of bias and their status as experts. First, while identifying the aerosol industry as biased, he seemed to absolve scientists of similar bias, suggesting that—because scientists discovered ozone depletion before it was a polarizing issue—their findings would not have been biased by the controversy presented in the scenario. Second, he identified scientists as having more expertise (‘they’re more experts’; they have a ‘better opinion’).

4.2 Understanding of Practices Within Scientists’ Use of Evidence

When attending to understandings of experts’ evidentiary practices, students focused on three of the four dimensions in the GOE framework: evaluation, interpretation, and integration. Overall, the Chocolate scenario prompted students to examine the study using understandings of the evaluation dimension, since the accompanying questions (see Appendix) focused students’ attention on the study design. However, as illustrated below, students also exhibited understandings of the integration and interpretation dimensions.

We use Asra’s example as representative of how all 13 students exhibited GOE in the evaluation dimension:

[T]hey have a control group; they have three groups actually in this case. And it seems like they did it right …[However], I’m still sceptical… A period of 21 days … that’s not enough… so, I would want to see this study replicated and possibly redone in different ways before I’d be ready to completely change my diet because of it. There’s only 16 adults in this, age what? 19–67, so I mean that’s a pretty good range age wise, but… you’ve got 16 people divided into three groups, … They haven’t even taken a random sample here. … You need to have… a larger scope; you can’t just be testing five people and assume that it’s representative of the population.

Asra’s concerns are indicative of the REP ‘Evaluating the appropriateness of study design (e.g., appropriate samples and comparisons)’ (Duncan et al., 2018, p. 915). Although Asra evaluated the study design positively in terms of the inclusion of a control group and the wide range of ages represented by study participants, she expressed scepticism due to the study’s duration (21 days) and the small sample size, noting that 5 people in a given treatment group would not be representative of the broader population.

Asra’s excerpt also illustrates the integration dimension. Like Evan (responding to the CFC scenario), Asra demonstrated understanding of the EI ‘Replicated evidence’ by explicitly calling for the study to be replicated. In addition, her call for the study to be ‘redone in different ways’ suggests attention to the EI ‘Variety of evidence (i.e., multiple types/lines of evidence)’ and/or the EI ‘Consistency of support (i.e., lack of contradictory evidence)’ (Duncan et al., 2018, p. 916). It is unclear what Asra means by ‘redone’; however, her call for the study to be done ‘in different ways’ suggests understanding of the value of additional confirmatory evidence.

In contrast to Asra, James exhibited understanding of the interpretation dimension. James attended to the REP ‘Developing arguments that systematically connect evidence to models’ (Duncan et al., 2018, p. 915):

If they gave like a complete explanation of what the chocolate does to your body to make you lose weight, and maybe young people, I’d be less sceptical. … I would just be curious about what kind of chocolate, and what in the chocolate is actually making you lose weight. Even if I knew that the study was valid, I’d want to know why, biologically, like how that works… like a deeper explanation.

We interpreted James’s expressed desire to understand the mechanism behind evidence of chocolate’s effect on weight loss as related to an understanding of scientists’ work to connect evidence to explanatory models.

4.3 Epistemic Concepts Underlying Students’ GOE

In addition to illustrating dimensions of the GOE framework, we also unpacked students’ reasoning to identify epistemic concepts that seemed to underlie their GOE. Here, we describe two epistemic concepts that appeared particularly important in students’ responses to the CFC and Chocolate scenarios: the inherent uncertainty of scientific claims and the randomized controlled trial (RCT).

Two lines of work were especially relevant to our efforts to capture understandings of these two epistemic concepts: dimensions of reliability in science (Allchin, 2011), a framework for understanding the nature of science, and the concepts of evidence framework (Gott et al., 2015; Roberts & Johnson, 2015), which further specifies relevant concepts from Allchin’s framework by describing knowledge underlying understandings of scientific evidence. Inherent uncertainty of scientific claims is reflected in Allchin’s concept ‘error and uncertainty’ (p. 525); concepts of evidence further unpack uncertainty by describing how scientists present ‘confidence limits’ to ‘indicate the degree of confidence that can be placed on the datum’ and explaining what specific confidence limits mean (Gott et al., 2015, p. 7). Similarly, Allchin’s framework includes the concept ‘controlled experiment’, and the concepts of evidence framework explicitly defines ‘randomised controlled trial (RCT)’ as random assignment of a large sample to treatment groups, such that ‘confounding variables will even out’, leaving only the difference due to the treatment (Roberts & Johnson, 2015, p. 356).
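The logic of these two concepts can be illustrated with a small simulation of our own (not drawn from the cited frameworks; all numbers are hypothetical). With no true treatment effect, any observed difference between randomly assigned groups reflects chance imbalance in a confound; that imbalance shrinks as the sample grows, which is why randomization requires a sufficiently large sample. The ±2-standard-deviation band plays the role of a confidence limit on what chance alone can produce:

```python
import random
import statistics

def simulated_trial(n, effect=0.0):
    """Simulate one randomized trial: n participants with a confounding
    baseline variable (here, weight in kg) are randomly split into
    treatment and control groups; return the observed group difference."""
    baseline = [random.gauss(75, 10) for _ in range(n)]
    random.shuffle(baseline)  # random assignment to groups
    half = n // 2
    treatment = [x + effect for x in baseline[:half]]
    control = baseline[half:]
    return statistics.mean(treatment) - statistics.mean(control)

# With effect=0, any observed difference is pure chance imbalance in the
# confound. Larger samples 'even out' the confound, so the spread of
# chance differences shrinks.
for n in (16, 160, 1600):
    diffs = [simulated_trial(n) for _ in range(2000)]
    limit = 2 * statistics.stdev(diffs)  # rough 95% confidence limit
    print(f"n = {n:4d}: chance differences mostly within ±{limit:.2f} kg")
```

With n = 16 (the chocolate study’s sample size), chance alone produces group differences of roughly ±10 kg under these hypothetical numbers, anticipating Brooke’s point below that only ‘a bigger group’ lets confounds ‘cancel out’.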

4.4 Inherent Uncertainty of Scientific Claims

The concept of the inherent uncertainty of scientific claims appeared to underlie some students’ understanding of laypeople’s evidentiary practice, particularly regarding the REP ‘Identify who the experts are, including level and relevance of expertise’. This could be seen, for example, in Brooke’s response to the CFC scenario:

One thing I liked about scientists for instance was the fact that they did admit, ‘Okay, there is this hole in the ozone layer, we don’t 100% know what it is, but we kind of think this could be one of the reasons’. While it’s like the other one [claim from the CFC companies] seems to be a lot more confident…, they can’t be that sure.

Her consideration of scientists as more trustworthy than the aerosol industry seems to reflect understanding that scientific claims, particularly predictive ones as in this scenario, are inherently uncertain and that reporting levels of confidence in such claims can be a strength of the scientific process.

In contrast, the scientists’ uncertainty made it difficult for Matt to trust their claim:

Well, … if they can’t exactly explain it because then it is really uncertain and it’s just hard to take …side with the scientists just because they’re saying, ‘We think this but we can’t say why we think this’.

In this excerpt, he did not exhibit the same understanding of the inherent uncertainty of scientific claims that Brooke exhibited.

4.5 Randomized Controlled Trial (RCT)

Concepts regarding RCTs appeared to underlie some students’ understanding of evidence evaluation, particularly the REP ‘Evaluating the appropriateness of study design’, as demonstrated in responses to the Chocolate scenario. For example, Brooke and Caren both identified the use of control and experimental groups as an important criterion for determining the effect of one variable (chocolate) on another (weight loss). Using this concept, both evaluated the design of the chocolate study to be sound.

However, they differed in their understanding of the other important criterion for RCT: random assignment to treatment groups, requiring a sufficiently large sample. As illustrated below, Brooke demonstrated understanding of this concept, and Caren did not. Brooke considered a large sample crucial for randomization and, thus, for determining the effect of the target variable (chocolate diet):

By randomizing it in like a bigger group, the odds are that we’re going to get people with the specific genetic conditions and some that don’t, some with these specific personal habits, some that don’t. So in the overall like larger scale, these things are probably going to cancel out through randomizing … just like keeping everything intact and just changing one variable.

In contrast, Caren considered randomization into control and treatment groups to be sufficient (despite the small sample size):

They did try to randomize the groups. And they did intervene actively on the groups … So, I mean that’s a good thing they did there …The sample size is a little small … Maybe it would be good to at least start out with … It may be an advantage. … it may be good to start… Have a smaller group.

As these examples illustrate, specific epistemic concepts could be identified underlying the GOE that students exhibited. In particular, Brooke and Matt illustrate how the epistemic concept of the inherent uncertainty of scientific claims may underlie the layperson’s REP ‘Identify who the experts are’, while Brooke and Caren illustrate how epistemic concepts related to RCTs may underlie the evaluation REP ‘Evaluating the appropriateness of study design’. In both cases, Brooke demonstrated understanding of the epistemic concept, while her counterpart did not. The contrasts between Brooke and Matt, and between Brooke and Caren, provide some indication of how understanding of epistemic concepts may affect students’ evaluations of SSI-related scientific knowledge claims.

5 Discussion

Drawing on our findings, we discuss both how the GOE framework was useful in analysing our data and how it could be further unpacked to increase its utility. First, the GOE framework allowed us to make important distinctions among students’ understandings of different evidentiary practices. By distinguishing between experts’ and laypeople’s use of evidence, the GOE framework brings attention to the lay use of evidence. In our data, students engaged with both types of evidentiary practices when considering socioscientific scenarios. Although students’ engagement in the practices of scientists (as advocated by current science education reforms, e.g., NRC, 2012) is vital to their understanding of scientists’ use of evidence, engagement with SSIs may be important for developing students’ understanding of laypeople’s use of evidence and, thus, for empowering students as competent outsiders who use science wisely in their daily lives.

In addition, the GOE framework allowed us to distinguish the practice of evidence evaluation from other expert evidentiary practices: evidence interpretation and evidence integration. GOE associated with evidence evaluation can be seen in traditional images of students’ engagement in science inquiry, which emphasise experimental investigation and, thus, understandings related to ‘controlling variables’ and ‘identifying sources of error’ (NRC, 2012, p. 43). By calling attention to other evidentiary practices, the GOE supports attention to other aspects of scientists’ work—such as modelling (evidence interpretation) and working with evidence collected by others (evidence integration)—and, thus, to the importance of providing opportunities for students to engage with a range of scientific practices. As demonstrated by the different understandings students used to evaluate socioscientific scenarios, understandings related to these practices are important for engaging with scientific claims, not only as scientists, but also as citizens.

Second, although we found the GOE framework useful for making distinctions among different uses of evidence, unpacking the epistemic concepts underlying the framework may increase its utility. Brooke used the same components of the GOE framework as did her counterparts Matt and Caren, but Brooke drew on specific epistemic concepts to support her more well-informed evaluations of the socioscientific scenarios. Her understanding of the uncertainty of scientific claims allowed her to resist the aerosol industry’s critiques of the scientists’ claims in the CFC scenario, and her understanding of criteria for RCT allowed her to recognize a fatal flaw in the design of the chocolate study. In both cases, these epistemic concepts would allow Brooke, as a competent layperson, to avoid being fooled by those attempting to use science to persuade her. Although epistemic concepts may be inferred from the GOE framework, these concepts need to be unpacked so that students’ opportunity to develop GOE is not unduly dependent on teachers’ ability to make these inferences.

In our study, we unpacked only some epistemic concepts—and illustrated even fewer in this chapter. Our engagement with relevant literature suggests that others’ frameworks may be useful for further articulating epistemic concepts underlying the GOE framework. For example, Gott et al.’s (2015) concepts of evidence describe ‘a body of knowledge which underlies an understanding of scientific evidence’ (p. 1), but at a much smaller grain size than that of the GOE framework. By articulating, in more detail, understandings associated with specific components of evidentiary practices, the concepts of evidence framework could be used to fill in some of the epistemic concepts underlying the GOE framework.

6 Conclusion

This study provides an empirical exploration of the utility of the GOE framework. As Duncan et al. (2018) suggested, we see implications of the framework (as well as our proposed unpacking of it) for researchers and science educators. First, the GOE framework seems useful for describing and distinguishing among understandings of different evidentiary practices, calling attention to practices that receive less emphasis in current educational settings. A focus on laypeople’s use of evidence and on experts’ interpretation and integration of evidence has the potential to allow researchers to learn more about, and teachers to provide more support for, students’ understanding of these practices. Second, we suggest that epistemic concepts should be explicitly included in studies of, and efforts to support, students’ GOE. Future studies could more systematically investigate how epistemic concepts relate to students’ GOE. Such studies could consider a wide range of epistemic concepts useful for engaging with a variety of SSIs. We hope that this empirical study will contribute to further investigations of, and support for, students’ informed engagement with SSIs and, ultimately, decisions that are personally and societally beneficial.