Socio-scientific Reasoning

Sadler: As I reflect on the project featured in this chapter while simultaneously considering the future of my own work in the area of SSI and the SSI research agenda more generally, I am drawn to the socio-scientific reasoning (SSR) aspect of the project. In some ways, this part of the project was not very successful. First, the students demonstrated no gains in SSR. Second, the subcomponents of the larger SSR construct (i.e., complexity, inquiry, and perspectives) did not show evidence of association with an underlying latent variable. In other words, these data suggest that the SSR aspects ought to be treated as independent variables as opposed to related subconstructs. However, despite these results, I think the work around SSR may be the most important contribution of the project. As we mention in the chapter and elsewhere (Sadler, Barab, & Scott, 2007; Sadler & Zeidler, 2009), I think that there is a real need for tools to help us as researchers and educators better operationalize what it is that we are trying to do with SSI. I do not think it is enough for us to continue to argue that we need to enhance scientific literacy. I do not think that anyone contributing to this book would disagree that promoting scientific literacy is important, but given the political climate in which schools are currently situated, I think that our community (i.e., those of us who advocate the contextualization of science education through SSI) has a responsibility to move beyond the rhetoric of scientific literacy as a rationale for SSI-based education.

Do others agree with the assertion that the community needs more sophisticated conceptual and assessment tools for supporting and justifying SSI-based education? Is SSR a useful construct in this regard? Does the discussion around this theme within the chapter provide useful insights? What steps should be taken to advance this work?

Zeidler: Let me first say that this study provides many key insights into important areas of SSI instruction and reasoning. The study advances the claim that even a limited experience with SSI can produce conceptual learning outcomes in students. More directly related to the question raised as to whether the science education community needs more sophisticated conceptual and assessment tools related to SSI-based instruction, my response would be “yes,” though not necessarily on the premise of “supporting and justifying” the practice of SSI pedagogy. It is my observation that the SSI framework has been amply supported and justified both on analytic and empirical grounds in previous research. However, I do agree that we need more sophisticated conceptual tools to better understand the nuances of factors associated with SSI reasoning, and more robust assessment tools that examine both contextual and cultural differences in reasoning about SSI. I think this study moves us a good step in the right direction.

Ironically, as I sit here writing this in May 2010, I cannot help but think of the human and environmental crisis unfolding in the Gulf of Mexico with the BP oil spill (although the word “spill” seems like a quaint euphemism, as if we only need to dab up the offending toxic pollutants). This is at once a Real-World Problem and a Real-World catastrophe in every sense. It is also, obviously, a socio-scientific issue. In this reality, it probably matters little whether we treat socio-scientific reasoning (SSR) as a unitary construct or as independent variables (although for theory development it certainly does matter); rather, it seems clear to me that, at this very moment, the need for students to be able to orchestrate the components of SSR (understanding the complexity of an SSI, examining issues from multiple perspectives, realizing the need for ongoing inquiry, and evoking skepticism when presented with potentially conflicting and/or biased information) is of critical importance.

Sadler: I want to challenge Dana’s contention that “the SSI framework has been amply supported and justified both on analytic and empirical grounds.” In many respects, I agree with the statement: many studies (including those featured in this volume) provide empirical support for the use of SSI as contexts for science education. The SSI movement seems to be growing in that there is evidence of increased classroom use of SSI as well as more frequent SSI contributions to the science education literature base. However, in discourse at international and (some) national levels, SSI remain marginalized. With the parenthetical insertion in the previous sentence, I am hedging a bit because I know that some national educational systems are incorporating SSI in substantive and meaningful ways, but in our national context, the USA, SSI are not a prominent element of the national discourse around science education. My analysis of current policy and standards documents in the USA leads me to think that SSI receive, at best, lip service in framework-type statements. When those frameworks are translated into standards and benchmarks that become reified in standardized assessments, which in turn drive system-wide (local and state school systems) decisions regarding curriculum and pedagogy, the focus on all that is significant with respect to SSI falls away. It is in this context that I contend that our community has much to do in terms of justifying the SSI framework as an important aspect of science education.

A reasonable question in response to this argument, in light of the previous comments regarding socio-scientific reasoning, is how a new construct (i.e., SSR) might help to justify the use of SSI in science education. I think that if we want to advance the SSI movement in terms of making SSI a more prominent aspect of science education, then we need conceptual tools to help translate the lofty goals that often feature SSI-related themes in policy frameworks into the standards, assessments, and curricula that ultimately get enacted. Currently, vague links are made between teaching science in the context of SSI and learner development of scientific literacy and reasoning. But scientific literacy and reasoning can mean any number of things, so assessments and curricula focus on scientific formalisms (i.e., facts and principles) that have been clearly defined and are uncontroversial. In proposing socio-scientific reasoning, we sought to be more precise in identifying a specific suite of practices that could be featured in SSI-based learning experiences and assessed. If we want to move science education systems away from an exclusive focus on scientific formalisms, then we have to provide options that fit within the political constraints of those systems. Scientific literacy and reasoning (presented generally) do not fit within the current political constraints because they are ambiguous and practically impossible to assess, at least when they are presented as ambiguously as they are in policy documents. By specifically defining socio-scientific reasoning in terms of measurable subconstructs, we were attempting to create a construct that would fit within the political constraints of modern school systems and better position SSI within those systems. This is why we viewed the finding that the SSR subconstructs were not correlated as problematic. If the subconstructs are not related, then that challenges our definition and ultimate use of the socio-scientific reasoning construct. However, despite the results, I am not completely convinced that the aspects are unrelated. It seems unlikely to me that the ways individuals think about the complexity, inquiry, and skepticism aspects of a particular controversial issue are not related. It seems more likely to me that our approaches to measuring these aspects are not sufficiently valid and reliable, which, of course, demands additional study.
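To make the measurement question concrete, the sketch below shows one simple way to probe whether subconstruct scores cohere. The data and variable names are hypothetical, invented for illustration rather than drawn from the CATSI project: if complexity, inquiry, and perspectives scores reflect one underlying SSR construct, their pairwise correlations and internal consistency (Cronbach’s alpha) should both be substantial.

```python
# Minimal sketch with invented scores; not the CATSI project's data or analysis.
import numpy as np

# Rows = students; columns = hypothetical subconstruct scores
# (complexity, inquiry, perspectives), each rated on a 1-5 rubric.
scores = np.array([
    [3, 2, 4],
    [4, 4, 5],
    [2, 1, 2],
    [5, 4, 4],
    [3, 3, 3],
    [4, 2, 5],
], dtype=float)

# Pairwise Pearson correlations among the three subconstructs.
corr = np.corrcoef(scores, rowvar=False)
print("Correlation matrix:\n", corr.round(2))

# Cronbach's alpha: treats the three subconstruct scores as items on one scale.
k = scores.shape[1]
sum_item_vars = scores.var(axis=0, ddof=1).sum()
total_var = scores.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - sum_item_vars / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")
```

Low pairwise correlations or a low alpha would mirror the result described above, with the three aspects behaving as separate variables rather than indicators of a single latent construct; a full test of the latter would call for confirmatory factor analysis on a larger sample.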

Eastwood: I certainly agree that the goals of SSI need to be defined beyond “scientific literacy,” and that better instruments for measuring these outcomes are needed. I agree that the construct of socio-scientific reasoning has great potential for this and can facilitate the development of effective assessment tools. I see the three-part construct as highly consistent with King and Kitchener’s reflective judgment model. It does make sense to think that the subconstructs of perspectives, inquiry, and complexity would be related, but I can understand why they may come out as separate constructs. For example, students might be inclined to discuss different perspectives on an issue because they were instructed to do this in class; they know to look for multiple perspectives. They could still easily be confused about how scientific knowledge is developed. I wonder if the subconstructs would be more closely related among adults and college students, since clear patterns emerged with these groups in King and Kitchener’s, Perry’s, and Baxter Magolda’s work. I also wonder if classroom scaffolding favoring certain aspects affects the outcome. It seems important to consider how the aspects are addressed explicitly and contextually in classroom discussions.

Dawson: It has been interesting to read this chapter again, and in the same week as the Gulf of Mexico oil spill. I recently received the International Journal of Science Education issue containing the Klosterman and Sadler (2010) paper, which outlines the benefits of an SSI such as climate change and its role in enhancing conceptual development. I want to comment first on SSI in curriculum documents and second on ways of improving the teaching and assessment of SSI. Twice, I have heard the teaching of SSI described as “woolly science.” The first instance apparently comes from the new Minister for Education in the UK, suggesting that this “type of science is woolly.” The second was on the front page of our local (Perth, Australia) paper when our new national science curriculum was released and our local Nobel laureate in Medicine was asked to comment on the curriculum. He also described parts of the science curriculum as woolly. The section he referred to was “science as a human endeavour.” Interestingly, he is actually mentioned by name as an example of a scientist in that section. The point is that many of the people who decide what is taught in our schools are not science educators and have a narrow view of what school science is.

It seems there are two audiences that the outcomes of research in SSI need to reach: (1) power brokers and curriculum writers in central offices and (2) teachers in schools. I was involved in writing a new biology curriculum which included many aspects of SSR, such as multiple perspectives, skepticism, and evaluating risk. By the time the curriculum was published, these SSR-related elements had all been removed, mainly because they were considered too difficult to assess or too far removed from “real science.” However, when I speak at teachers’ conferences, I receive warm receptions from many teachers who are keen to make their lessons more interesting, relevant, and contextual.

Sadler: After reading Vaille’s comments, I checked an online dictionary to make sure that I knew what “woolly” meant; it is not a term that I hear very often. The most pertinent entry presented the following definition: “lacking in clearness or sharpness” (Merriam-Webster, 2010). I certainly disagree with pronouncements that SSI-based education is “woolly” in the pejorative sense evident in the quotes that Vaille mentioned. However, I do think that our approaches to defining learning outcomes and assessments in the context of SSI have lacked clearness and sharpness. The socio-scientific reasoning construct may help bring these issues into better focus. Even if the construct does not end up yielding fruitful results, I hope these discussions help the community attend to these issues and better address the perceived shortcomings of SSI-based education.

Zeidler: Not to add any more ad hominem arguments against those woolly-headed reactionaries who cannot conceive of the notion that science education may exist in a social context, but I believe SSI does have ample support and justification. The question may be: who is listening? By this I do not mean to imply that the SSI paradigm is now “normal science” and that we are merely left with “mopping up operations” to tidy a few loose ends. And I would agree that it is incumbent upon us, those of us who are advocates for this progressive scheme, to add clarity and refinement and, where necessary, to dismantle and reintroduce more robust ideas about how to engage children in the activity of science, facilitate public understanding of science through the everyday use of SSR, and provide better indicators of the effectiveness of this approach. Questions obviously do remain as to whether the SSI approach is compatible with standardized assessment (as Troy alludes to) or whether models of authentic assessment may gain a foothold in the political hegemony of education (as Vaille seems to suggest). However, I do think there exist some promising protocols to help document progressive classroom environments in which SSI would flourish. For example, a modified version of the NCOSP Science Classroom Observation Guide (2008), which seems sensitive to observational records of classroom inquiry dynamics and growth consistent with contemporary science education goals, is promising because it allows for the identification of “indicators” of practices that can be observed in effective classrooms. It also differentiates evolving practices, so that teachers can become better informed over time about supporting students in the learning of science. The overarching categories (which are broken down into numerous subcategories of classroom instruction) include: Classroom Culture is Conducive to Learning, Science Content is Intellectually Engaging, Instruction Fosters and Monitors Student Understanding, and Students Organize, Relate, and Apply Their Scientific Knowledge. I am not suggesting that this particular protocol is the answer, but I am suggesting that conceptualizations of authentic assessment may, on the one hand, be realized; on the other hand, such assessments may be fundamentally at odds with the type of large-scale assessments (e.g., PISA) that are so prominent within our current system.

Research Design

Klosterman: Over the last year, we have received extensive feedback on the CATSI project. One commonly expressed concern is the lack of a control group in our research design. We acknowledge that control groups are ideal in most educational research. However, as we expressed in our paper (Klosterman & Sadler, 2010), we were not working in an experimentally ideal situation but “in the situated world of real schools” (p. 1040). Given our local context, limited available time, and desire to work closely with a small number of teachers, finding another classroom that was similar in terms of size and population was only one issue. Given these limitations, finding another classroom that was addressing the same standards, in the same time frame, and that did not align with or highlight any SSI was practically impossible.

We contend that the significance of our study lies in the fact that we now have empirical evidence to support our hypothesis that an SSI-based curriculum can impact student content learning. We did not investigate whether an SSI-based curriculum can improve content learning MORE than a curriculum devoid of an SSI focus; for that question, we acknowledge that a control group would be required. To our knowledge, previous SSI research has not looked at student learning gains (both proximal and distal) resulting from a classroom-implemented SSI-based curriculum. This type of research is critical. To influence policy-makers and other educational gatekeepers, we need to continue to push the SSI agenda forward with concrete evidence of its impact on student learning.
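For readers weighing the no-control-group design, the sketch below illustrates the analytic logic of a one-group pre/post comparison. The scores are invented for illustration; the project’s actual instruments and analyses are described in the chapter. A paired-samples t-test asks whether students improved relative to their own baseline, which supports claims about change within the group but not comparative claims against another curriculum.

```python
# Illustrative one-group pre/post analysis; all scores are invented, not CATSI data.
from scipy import stats

pre = [12, 15, 9, 14, 11, 10, 13, 8, 12, 16]     # pre-test scores
post = [16, 18, 13, 15, 15, 14, 17, 11, 14, 19]  # post-test scores, same students

# Paired-samples t-test: each student serves as his or her own baseline.
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Effect size for paired data: mean gain divided by the SD of the gains.
gains = [b - a for a, b in zip(pre, post)]
mean_gain = sum(gains) / len(gains)
sd_gain = (sum((g - mean_gain) ** 2 for g in gains) / (len(gains) - 1)) ** 0.5
print(f"mean gain = {mean_gain:.2f}, d = {mean_gain / sd_gain:.2f}")
```

A significant positive gain licenses the claim made here, that the curriculum can impact learning; showing that it impacts learning more than an alternative would require the comparison group the design deliberately set aside.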

Eastwood: As long as you are not trying to compare improvement in content learning to a traditional approach, I would not say that a control group is “lacking.” Since there are so many variables to consider in finding reasonable comparison groups, it makes sense for a study that takes a more in-depth approach to assessment to focus on the students receiving the intervention. The approach of using different “distances” of assessment is very useful for identifying how and what students are learning, especially since particular assessments tend to favor one group or another depending on the teaching strategies and assessments used in the classroom.

I would say that not having a control group is justified in more in-depth case studies, those that take into consideration the situational aspects of a complex learning environment. To me, it was effective to have more detail than previous studies on how students gained content knowledge in the SSI intervention. When the intervention and results are described in detail, the reader can make reasonable inferences about how big or small the content gains are in relation to other teaching approaches.

I do not believe the strength of the findings is limited if you are arguing that SSI promote content learning (you showed this in a very convincing way). This is appropriate for a study using a novel approach to assessing content gains with SSI. I do think repeating the intervention with other groups could increase influence with stakeholders, strengthening other authors’ argument (especially given the various distances of assessment) that content learning is not compromised in SSI-based instruction.

Dawson: Whether or not to use a control group is a very tricky point. You [the chapter authors] did argue convincingly as to why a control group was not possible in this research. Like all good qualitative research reports, you provided a great deal of contextual information and allowed the reader to judge the verisimilitude of the research. It is worth noting that different reviewers of the IJSE paper may well have rejected it. I approve of research that acknowledges the ‘messiness’ of real classroom research, and it is time we, the science education community, were more honest about the nature of the work we do. The more that necessarily complicated, classroom-based research is presented at conferences and in journals by reputable researchers like yourselves, the more it will be accepted.

Benefits of Research

Dawson: One of the desired outcomes of quality research can be gains in understanding by the participants. I would like to ask the authors what they think the participating teachers and students gained as a result of participating in this research and how they know this. I would also like to ask what they found to be the most rewarding aspect. Finally, if they did this study again, what would they do differently?

Sadler: The student participants developed new understandings of global climate change and the scientific concepts underlying this issue. The data presented in this chapter support this claim in that we document statistically significant gains in student performance on a standards-aligned test as well as qualitatively distinct shifts in understandings on curriculum-aligned assessment prompts. This is the easy answer to Vaille’s first question, but I do not think that it really gets at the point that she is raising. Vaille is asking about the benefits of teacher and student involvement in the research process. For this question, we have less compelling evidence because generating this kind of evidence was not one of our primary goals. We had not thought a lot about this issue beyond an expectation that the partnering students would learn some science and become better prepared for dealing with complex SSI and that the teachers with whom we worked would become more comfortable using SSI in substantive ways in their classrooms. To help address this question, I asked William, one of our partner teachers, to share his thoughts on what he and his students might have gained through their participation in the project. (After the school year during which CATSI was implemented, Molly moved to another region of the country and we did not stay in contact.) William reported that his students expressed genuine enthusiasm at being a part of a research project. Many of the students felt empowered because they were contributing to something “bigger” than their typical classroom experiences. They asked questions about how their tests and information would be used and what we might learn from the results. In comparing the seriousness with which the students approached assessments associated with the project and their normal approach to classroom assessments, William felt confident that students exerted a level of effort and sincerity not usually observed.

In terms of the influence of project participation on William himself, he indicated that the experience made him more interested in educational research. At the time of the project, William was in the midst of completing a specialist degree and was considering continuing on to earn a doctoral degree. By the end of the project, William had become very interested in research and the potential roles he could play in conducting science education research. Since the CATSI project, William has continued his graduate studies and he is currently a full-time Ph.D. student. He has developed a research agenda associated with science learning in the context of authentic research opportunities.

Vaille also asked about what we, as the researchers, found most rewarding in the project. For me, the opportunity to collaborate with the teachers as extensively as we did and to be in their classes as they worked through the curriculum that we jointly developed was a great experience. For several years, I have been working with other researchers to develop empirically based understandings about how students negotiate SSI and how to situate SSI in classrooms. I visited classes and interacted with students and teachers, but this project allowed me to collaborate with teachers at a new level. In many ways, it was an opportunity to put much of what we had learned into action. I found it to be challenging but also fun and rewarding.

The last question that Vaille posed challenges us to reflect on what we might do differently if we did the project again. There are two things that I would do differently, although only one of the two would actually have been possible given the constraints we experienced. The first, which we could have accomplished but did not think to do at the time, would have been to collect data that could have informed questions related to student interest and motivation. The SSI literature consistently claims that students become more interested in science when they can explore it through contextualized learning opportunities like SSI. I would like to have developed strategies for collecting student-level data related to this issue. Surveys or focus group interviews focused on these topics would have been relatively easy to conduct and might have yielded valuable insights. The second change, which I would like to have implemented but could not given the constraints of the specific contexts within which we worked, is administering a follow-up assessment several months after the conclusion of the CATSI unit. Students showed content gains on pre/post-tests given immediately before and after the unit. A follow-up test administered three months after the unit would have enabled inquiry into the long-term effects of the experience. As we mentioned in the chapter, the timing of the implementation and the need to work around school and class calendars made this kind of follow-up testing impossible.

Dawson: Just one brief comment about follow-up tests: a concern I have is that students may suffer assessment fatigue if asked to write too much about one topic, even if they are happy to participate in the research. Some students may write less on the premise that they have already told you the answer. Of course, knowing whether gains in learning are sustained is important; maybe we have to be creative about how we find this out. The other important point is whether students who have a greater understanding of climate change issues actually change their behavior in any way.

Klosterman: As Troy mentioned, we admittedly lack evidence to support any claims about how students or teachers benefited from our research in terms of interest or motivation. Nor do we have interview or survey data about the impact of our study beyond content learning. Student content learning and socio-scientific reasoning were the foci of our study and therefore drove our research design. However, this study reminded me of the power that comes from collaborating directly with teachers on projects that immediately impact their classrooms and student learning. For me, this was the most rewarding aspect. As a result of this study, and through direct collaboration with teachers, we developed a tangible product that was immediately usable by teachers, was loaded with science content, and highlighted the social, economic, and political aspects of global climate change. At the most basic level, the teachers benefited by working as part of a team to develop this product, which was theirs to keep.

In my opinion, having conversations about the theoretical underpinnings of our work and about future research possibilities is stimulating. But I believe that our work is truly limited if it does not clearly translate to classroom practice. In this study, we worked with teachers from the outset. We worked together to choose an SSI that was relevant to the teachers’ classrooms and spent a considerable amount of time designing lessons that aligned with course, state, and national standards as well as with the teachers’ personal teaching styles. In fact, at times it felt like we were participating in a classroom service project rather than a research project because our efforts were so focused on the tangible curriculum. But to me, that makes sense. Ultimately, the goal of research is to improve student learning. The closer we can get to students and to the teachers who work with them, the more likely we are to impact student learning.

Although I felt that this project was certainly a step in the right direction, I think we could have pushed the envelope further in terms of the practical utility of our research findings. The results of this study obviously contribute to the science education research field and its understanding of how SSI-based instruction can impact student learning. However, what did the teachers gain from these findings? We mentioned that the teachers were admittedly less involved in the design of the assessment instruments than in curriculum development. We did not make a deliberate decision NOT to involve the teachers; the timing and the amount of effort required made it impractical. However, I am left to wonder what the teachers would have thought of the students’ responses to the proximal (curriculum-aligned) assessment and their ability to make connections to the broader scientific concepts on the distal (standards-aligned) assessment. Would teachers have used those results? And if so, how?

Dawson: The benefit of educational research to the participants is something that I ponder often in my classroom-based research. When the focus of the research is related to morals, ethics, values, and a multiplicity of views, it becomes even more pertinent. We would like students to consider other stakeholders’ points of view and show empathy, so it is important that we do the same.

With regard to motivation and enjoyment, certainly when observing classes in which students are debating SSI, the excitement is palpable (though perhaps difficult to collect evidence about). In addition, this motivation may well be one of the reasons that students’ conceptual understanding improves when SSI are used, even if less time is spent on learning content.