1 Introduction

Decision-making is fundamentally the process of choosing, among several available alternatives, the one that will produce the optimum result. This selection becomes complicated when there is uncertainty about the consequences of the options and associated actions, especially when these consequences may be harmful or even catastrophic to, for example, health or the natural and social environment, and in general to personal and social values.

Everyone has experienced the problem of uncertainty, both in familiar domestic circumstances relating, e.g., to health, work, family, medical opinions, and judicial decisions, and in decisions relating to more global societal questions like climate change, the management of energy resources, mass migration, and the application of radical and powerful new technologies.

In the Cambridge Dictionary (https://dictionary.cambridge.org), for example, uncertainty is defined as ‘a situation in which something is not known’, e.g., the effects of a drug or the outcomes of specific activities. In the epistemic literature, definitions are given for specific kinds of uncertainty, e.g., for scientific and axiological uncertainty (see p. 9), and in specific contexts, e.g., in the context of decision theory (the decision-theoretical definition of uncertainty, see p. 6). Uncertainty is always involved in, and characterizes, various other related terms and states, such as unpredictability, indeterminacy, ignorance, and risk (Klinke & Renn, 2002; Kozyreva & Hertwig, 2021). Uncertainty is thus a condition, a cause that generates risk; it is, in other words, a foundational and principal element of risk.

Risk is a word frequently used in our everyday activities and a key term in the different risk disciplines and decision theories. In its usual sense it is connected with the idea of fear, harm, and threat, with a possible undesirable condition that might result from activities with imperfectly predictable consequences (Hansson, 2005; Kinouchi, 2018). In recent decades risk has been an object of study of increasing interest in a wide range of research fields, including statistics, economics, the social and natural sciences and the philosophy of science, and more specifically in the fields of ‘Risk Analysis’ and ‘Decision Theory.’Footnote 1 In these last two areas, understanding and handling risk in the decision-making process is a major challenge and the subject of on-going debate, criticism, and evolving conceptions, as we shall see in the following sections.

Contemporary societies, scientifically/technologically oriented, industrialized, and globalized, face increasing uncertainties about their future and must deal with pressing risk-related decision problems, particularly with regard to environmental and natural resource management, but also in connection with the applications of new and potential technologies that, as experience has shown, may have negative or catastrophic consequences for health, for the environment, and for national and global equilibria (see Beck, 1992). The implementation of such technologies and, more generally, a technology-based way of life create complex and unpredictable risks in contemporary societies — which in fact are described by Beck (1992) as risk societies.

Democratic societies have increasingly understood and accepted that dealing with such risks requires, apart from scientific data, the active participation of the persons or groups affected and of the general public, not only because scientific data relating to the problem may be non-existent, insufficient, or ambiguous, but also for the sake of achieving just and durable solutions. Consequently, the management of contemporary global risks especially needs citizens capable of constructive participation in democratic discourse and in collective decision-making and action (Birdsall, 2022; Elmose & Roth, 2005; Hansen & Hammann, 2017; Klinke & Renn, 2002; Kozyreva & Hertwig, 2021). Science education has a role to play and a contribution to make in the cultivation of such skills, and the subject of risk should therefore occupy an important place in the content and goals of scientific literacy, so as to promote the preparation of future citizens and researchers able to handle questions and dilemmas in conditions of uncertainty, as is the case with most of today’s socio-scientific issues (SSIs). In this context it has been widely argued that science education ought to include or reinforce basic programs on generic risk and on risk analysis and management (e.g., Christensen, 2009; Tuana, 2010; Hansen & Hammann, 2017; Aven & Flage, 2020; Schenk et al., 2021). However, risk does not have the place it deserves in science programs and teaching. This may be partly due to the traditional focus on teaching mainly certain, well-established knowledge, on decontextualized textbook problems, and on an epistemology of the general certainty of scientific knowledge, an epistemology that is insufficient and confusing when it comes to solving real-world problems and dilemmas. On the other hand, however, contemporary innovative approaches in science education that are interested in the social dimension and relevance of science education (SSI-, argumentation-, NOS-, and scientific literacy–oriented approaches) increasingly recognize the importance of risk and argue for introducing it as an explicit focus in science education (see Sections 6.1 and 6.2).

Risk as a topic began to appear in the science education literature in the mid-1970s. Early articles focused on the importance of the subject for society and education (e.g., Howes, 1975), while later articles offered analyses and elaborations of the concept of risk and of risk assessment. Their authors emphasize the importance and implications of addressing uncertainty and risk in the science classroom, specifically with regard to the conception and teaching of the nature of science (NOS) (e.g., Christensen, 2009; Millar, 2006; Osborne et al., 2003; Ravetz, 1997; Ryder, 2001), student reasoning, decision-making, and citizenship, the role of values in assessments of risk and related decisions (Cross, 1993; Eijkelhof, 1986; Kolstø, 2006; Schenk et al., 2019), and SSI-based teaching (Christensen, 2009; Genel & Topçu, 2016; Hansen & Hammann, 2017; Ratcliffe et al., 2005; Schenk et al., 2021).

For example, Christensen (2009) presents a global, comprehensive analysis of the nature and handling of risk and of the importance of teaching these topics in relation to a broad spectrum of science education goals and teaching approaches (students’ NOS understanding, decision-making abilities, and citizenship). She proposes a combination of realist and constructivist perspectives of risk (see Section 2) as a framework for teaching about understanding and assessing risk and for promoting an informed citizenry capable of coping with contemporary risk-related issues. Hansen and Hammann (2017) further develop the realist and constructivist approach to risk and suggest that linking these approaches offers a working framework for boosting risk competence in science education. Moreover, they identify three key components for supporting students’ risk competence in the context of teaching about SSIs: knowledge about generic risk and basic statistics, knowledge about the nature of science (including scientific uncertainty and the social dimension of science), and risk assessment (including cost–benefit analysis together with ethical considerations). Schenk et al. (2019) added further elements, for example the context-dependent nature of risk and the resulting variability in the conceptions, estimations, and management of risks, the role of values in risk-related decision-making, and the dual nature of risk — its subjective and objective aspects. Based on the literature of risk analysis and science education, they constructed a schematic model for teaching about risk, comprising four core elements for defining the concept of risk (uncertainty, probability, severity, and consequences) and three core elements for decision-making on issues involving risk (knowledge, values, activity). Schenk et al. (2021) reviewed SSI-based teaching publications to examine the extent of their treatment of risk, which ‘showed that few of the publications investigated engage with the concept of risk and the methods of risk analysis’ (p. 2216). In the light of these findings, they discuss the difficulties that teachers face in teaching risk and propose ways to reinforce teachers’ risk education, among them the availability of teaching material and guidance, curricular support, and teacher collaboration with experts on risk analysis. Genel and Topçu (2016) also regard aspects of risk analysis (such as statistics, probability, and the interaction between possible risks and values) as a vital part of SSI teaching and the ability to teach them as one of the main competences needed for teaching SSIs.Footnote 2

We can say that the three basic component topics for covering the subject of risk are the definition, the assessment, and the management of risk, given that these are the basic and common objects of study in the risk-related literature, whose articles focus on one or another of these topics (see, e.g., Hansson, 2005; Aven, 2018; Kinouchi, 2018; Aven & Flage, 2020; Rechnitzer, 2022). The science education studies cited above focus chiefly on the first two (definitions of risk and risk assessment), while the risk management process is not developed in sufficient depth. Where it is mentioned, it is restricted to the cost–benefit analysis decision model, which, as we shall see, is applicable mainly to decision-making in conditions of limited uncertainty and which also raises socio-ethical issues. The authors of these articles take this latter aspect seriously into account, proposing a cost–benefit analysis model (see Section 4) together with social-ethical considerations. They do not, however, further elaborate the strategies that incorporate a socio-ethical dimension and are used today to handle decision problems involving serious uncertainty and serious risks, such as those that characterize contemporary global socio-scientific issues, and specifically precaution-based and discourse-based risk management strategies. For example, Christensen (2009) refers briefly to precautionary risk management and provides a definition of it, and Kolstø (2006) identifies the precautionary argument as one of the main reasoning patterns in students’ decision-making on a controversial SSI. What is needed is a more comprehensive analysis, and comparison with conventional approaches, of the ways of thinking, the possibilities, the aims, and the frame of application of precaution-based and discourse-based approaches to risk, approaches with which students should become familiar in view of the serious uncertainties and dilemmas of the age. This gap is signaled by the authors themselves in their proposals for further research into argumentation and decision-making strategies for risk-related situations, and for the development of teaching material and guidance for teachers’ and students’ risk education (Hansen & Hammann, 2017; Genel & Topçu, 2016; Schenk et al., 2021).

This article deals with all three components of the subject of uncertainty and risk, with a view to extending and filling gaps in the risk-related science education literature, especially as regards risk-related decision-making (risk management), a topic insufficiently covered in science education. Our aim is to contribute to the sound theoretical foundation science education needs in order to promote an informed understanding and handling of uncertainty and risk, by providing a comprehensive reconstruction and synthesis of the basic concepts, accounts, and recommendations developed in the literature on the subject. We also discuss the interconnections and implications of this article’s analyses for contemporary educational approaches and scientific literacy goals.

In Section 2 we present a synopsis of the formal and informal definitions of risk and the realist and constructivist approaches to the subject, drawing on risk-related analyses and decision theories. In Section 3 we describe conceptions and the (probabilistic) risk-decision rules of classical/conventional decision theory and the breadth of the risk situations in which it can be effectively applied. In Section 4 we first discuss critiques of the conceptual and ethical limitations of these probabilistic models, and then we describe the thinking, the aims, the possibilities, and the conditions of application of the precaution-based and discourse-based approaches developed for decision-making under high uncertainty and severity of risk. We also discuss kinds of discourse (epistemological, reflective, and participatory discourse) needed to deal with the epistemic and socio-political disagreements that result from uncertainties and ambiguities of risk-related scientific knowledge. Based on analysis and comparison of the risk-management strategies of these three approaches (classical/conventional decision theory and the precaution-based and discourse-based approaches), we conclude that the proposed ‘analytic-deliberative’ approach (Clahsen et al., 2019; Klinke & Renn, 2002) provides a sound basis for risk evaluation and management and for teaching these topics. It makes use of all these strategies and kinds of discourse, depending on the degree of uncertainty and the seriousness of the potential dangers of the situation to be handled. In Section 5 we develop two topics that we regard as of special importance for deciding under uncertainty and which ought to be treated in science education. The first has to do with the criteria for determining the credibility and severity of alleged risks, so as to avoid an over- or underestimation of them that would lead to the adoption of wrong decisions and responses. The second concerns approaches and attitudes to technological progress and innovations and their risks. This topic acquires particular significance in view of the global risks of new technologies and the search for alternatives. In Sections 6 and 7 we give a comprehensive summary of our survey of the subject and discuss its educational implications and ramifications.

2 Definitions and Perspectives of Risk

In everyday usage, the word risk ‘refers, often rather vaguely, to situations in which it is possible but not certain that some undesirable event will occur’Footnote 3 (Hansson, 2004, p. 10). In academic fields risk is also used in an informal sense, to signify ‘an unwanted event which may or may not occur’ (Hansson, 2004, p. 10) or as the cause or probability of an undesirable event (e.g., earthquakes are a major risk for those living in earthquake zones). In the technical contexts of professional risk analysis, however, the standard use of ‘risk’ is as a quantified entity equal to the product of the probability of an unwanted event and its severity, where severity is some measurement of the amount of damage the event may provoke; this product is called the statistical expectation value of the unwanted event (see, e.g., Hansson, 2018, p. 2).

The underlying idea in this probability-based definition is that uncertainty can be quantified and measured, and thus perhaps grasped using statistical reasoning and the calculus of probability (Kozyreva & Hertwig, 2021). Each potential outcome of an activity (cost or benefit) can be assigned a probability from 0 (impossible) to 1 (completely certain), depending on the available information or supporting evidence for its occurrence, while the standard measurement for the severity of, e.g., an accident is the statistical number of injuries and deaths (Hansson, 2004, 2005; Kozyreva & Hertwig, 2021). This reduction of uncertainty to probabilities helps decision-makers to compare and prioritize risks and to weigh the costs and benefits of potential risk-takings, but, as we shall see in Section 4, it is open to a number of criticisms, among others that it disregards important aspects that ought to be taken into account in assessing risk.
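
To make the arithmetic of this definition concrete, the following minimal sketch (in Python; the hazard names and all figures are invented purely for illustration) computes the expectation value, probability × severity, for two hypothetical hazards:

```python
# Minimal sketch of the probabilistic definition of risk:
# risk = probability of the unwanted event x severity of its consequences.
# All figures are invented for illustration.

def expectation_value(probability: float, severity: float) -> float:
    """Statistical expectation value of an unwanted event."""
    return probability * severity

# (annual probability of occurrence, severity in statistical injuries)
hazards = {
    "hazard A (rare but severe)":   (0.01, 100.0),
    "hazard B (frequent but mild)": (0.10, 10.0),
}

for name, (p, s) in hazards.items():
    print(f"{name}: risk = {expectation_value(p, s):.1f}")

# Both hazards come out at 1.0: the expectation value lets decision-makers
# compare and rank risks, but it treats a rare catastrophe and a frequent
# minor harm as equivalent -- one of the criticisms discussed in Section 4.
```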

In this quantitative estimation of risk, both probability and severity are meant to be based on empirical evidence, in other words on the physical characteristics of risk. It views risk from a realist perspective, seeking to achieve as objective an estimate as possible. Other approaches have been developed, however (in the field of Risk Perception), which focus on and explore the subjective factor and show that, beyond the physical characteristics of risk, there are other aspects, criteria, and considerations, not included in the quantifications of the probabilistic models of risk conception, which determine people’s perception of risk and its acceptability. This is the constructivist perspective of risk, which was developed within the psychometric and cultural paradigms based mainly on psychological and sociological research.Footnote 4 Here risks are not considered as ‘fixed, objective entities interpreted the same by everyone,’ but differently ‘depending on their personal values, social and cultural contexts and other economic, legal, and ethical considerations’ (Clahsen et al., 2019, p. 441).

The psychometric paradigm investigated the psychological factors that affect people’s judgment about the seriousness and acceptability of risks (their risk perception); these include experience, values and emotions, and mental models. The socio-culturalist paradigm shows that ‘[r]isk perception’ is not everywhere the same but depends in part on the social and cultural contexts in which people interact and learn, and that if we take into account a broader range of outcomes — economic, social, environmental, ethical-moral, esthetic, etc. — we will see that these are perceived differently by different social groups, depending, e.g., on the group’s social form (hierarchical, egalitarian, individualist, etc.) (see, e.g., Klinke & Renn, 2002; Clahsen et al., 2019). Taken together, these two paradigms show that risk perceptions may differ depending on the type of risk, the risk context, the personality of the individual, and the social context, and that ‘the difference between experts and non-experts is linked to the fact that non-experts refer to broader definitions of the concept of risk, that they focus on different priorities’ (IBC 2021, p. 22).

Research on the constructivist perspective contributed to a more holistic understanding of risk by exposing dimensions beyond the severity and probability of harm, aspects ignored by probabilistic models of risk. This research also showed, however, that there may be risk-judgment biases in people’s perception of and responses to risk, which may also be responsible for laypeople’s ignorance of, or divergence from, experts’ risk assessments, with the concomitant danger of unreasonable or inappropriate decisions and behaviors (which does not of course invalidate the significance of lay persons’ estimations of risks, as noted above). For example, Renn et al. (2022) observe that the complex risks of climate change are often underestimated because they are not immediately perceptible or because they demand major long-term behavioral changes. The implication of risk perception research is that designing effective risk communication and risk management strategies requires both taking people’s subjective risk perceptions into account and understanding the existing barriers to public risk perception, especially in the case of complex, e.g., systemic, risks.Footnote 5

In sum, risk has been studied from two basic perspectives, the realist perspective of classical risk analysis and decision theory, and the constructivist perspective of psychological and sociological risk research. Each of these gives different weight to the objective or subjective dimension of risk (as well as to the role of experts and non-experts in risk evaluation) (IBC 2021). Together these paradigms show that the perception of a risk or danger and the response to it depend both on the objective scientific data and on the psychological and social context of the individual. Both these dimensions, the objective-scientific and the subjective, are necessary for a fuller understanding of risk and should be included in the conception and treatment of uncertainty and risk in science education, as many authors have argued (Christensen, 2009; Hansen & Hammann, 2017; Schenk et al., 2021). In their estimations and decisions, students should take into account both people’s perceptions and scientific data and experts’ estimations, though in a critical way (i.e., by considering the significance but also the limitations of these perspectives). For a critical evaluation of experts’ estimations, teachers and upper secondary and college students need to know how these estimations are made and how they are critically analyzed and evaluated; in other words, teachers and students need a basic knowledge of Risk Analysis and Decision Theory, which we discuss in the next two sections.

3 Uncertainty in Decision Theory: Risk, Uncertainty, and Ignorance

Decision theory developed after the middle of the twentieth century, using contributions from disciplines including mathematics, economics, social and political sciences, and philosophy. It studies norms and strategies for making decisions with the best possible outcome in terms of the aims and values of the decision-maker, i.e., for making rational decisions (see, e.g., Hansson, 2005, 2018; Bradley, 2018).

Decision-making under uncertainty is one of the most important and most challenging aspects of decision theories. In this regard, classical decision theory distinguishes various categories of decision, depending on the degree of uncertainty of the risk-related knowledge we possess: decisions under certainty where we know precisely the outcome of a choice; decisions under risk,Footnote 6 when the possible outcomes and their probabilities of occurrence are known; decisions under uncertainty, when the possible outcomes are known but their probabilities are unknown or partially known; and decisions under ignorance when both the probabilities and all or some of the outcomes are unknown (see, e.g., Hansson, 2005; Bradley, 2018; Rechnitzer, 2022). (As we shall see immediately below, specific decision strategies and rules have been developed for these categories in the context of conventional probabilistic decision theory.)
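
Schematically, and at the cost of some simplification, this taxonomy can be encoded as a toy function; the two flags below are our own shorthand for the knowledge conditions described above, not a device from the literature:

```python
# Toy encoding of the classical taxonomy of decision contexts.
# Reducing a context to two flags is of course a simplification.

def decision_category(outcomes_known: bool, probabilities_known: bool) -> str:
    # Decision under certainty (exactly one known outcome per choice) is the
    # limiting case and is not captured by these two flags.
    if outcomes_known and probabilities_known:
        return "decision under risk"
    if outcomes_known:
        return "decision under uncertainty"
    return "decision under ignorance"

print(decision_category(True, True))    # -> decision under risk
print(decision_category(True, False))   # -> decision under uncertainty
print(decision_category(False, False))  # -> decision under ignorance
```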

Classical decision theory usually uses the following basic elements for the representation and further handling of decision problems (see, e.g., Hansson, 2005, 2018; Giere, 2001; Bradley, 2018; Kozyreva & Hertwig, 2021): (1) a set of available options (i.e., possible courses of action that may be taken: e.g., to take a bus or to walk to be sure of getting somewhere in time, or to use nuclear power or fossil fuels as a source of electricity); (2) a set of possible outcomes that might result from choosing each of the options; and (3) a set of states, or features of the world, which includes the external factors that affect the outcomes of our choice and are connected with the conditions prevailing in the natural and social environment, such as the weather, the quality of others’ work, the current political-economic situation, and so on. Another basic factor for the evaluation and comparison of the possible options is knowledge of the probability of occurrence of each of their possible outcomes; these likelihoods are calculated from background knowledge about the phenomena involved in the options.
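
By way of illustration, these three elements can be written down as a simple data structure; the encoding below is a sketch of ours (reusing the bus-versus-walking example), not a standard representation from the literature:

```python
from dataclasses import dataclass

# Hypothetical encoding of the three standard elements of a decision
# problem: options, states of the world, and the outcome of each
# (option, state) combination.

@dataclass
class DecisionProblem:
    options: list[str]                    # available courses of action
    states: list[str]                     # external states of the world
    outcomes: dict[tuple[str, str], str]  # (option, state) -> resulting outcome

problem = DecisionProblem(
    options=["walk", "take the bus"],
    states=["traffic is light", "traffic is heavy"],
    outcomes={
        ("walk", "traffic is light"):         "arrive in time, tired",
        ("walk", "traffic is heavy"):         "arrive in time, tired",
        ("take the bus", "traffic is light"): "arrive in time, rested",
        ("take the bus", "traffic is heavy"): "arrive late",
    },
)
print(problem.outcomes[("take the bus", "traffic is heavy")])
```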

In order to compare the available options and then choose between them, a value (or expected utility) is assigned to each of them according to the desirability of their outcomes, which depends on our preferences, value criteria, circumstances, etc. These values are most commonly expressed numerically, i.e., with numbers expressing the magnitude of our preference for the outcomes of the options (see Giere, 1991; Hansson, 2005; Bradley, 2018). The expected utility of an option is derived from the combined calculation of the values and probabilities of all the positive and negative outcomes of the option (see, e.g., Hansson, 2005; Resnik, 2003).Footnote 7

In the case of decision under risk (when outcomes and their probabilities are known), classical decision theory suggests calculating the values/utilities for all the options and simply selecting the option with the highest expected utility (the decision rule of maximizing expected utility). The expected utility model for making rational choices thus applies unhindered in situations where we can assign probabilities and utilities to the various outcomes. More often than not, however, we know little or nothing about the probabilities of the outcomes, or we may not even be aware of the whole set of possible outcomes, and the question is what decision strategy or rule one should follow in such cases.
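
A compact sketch of the rule follows; the option names, probabilities, and utilities are invented for illustration:

```python
# Sketch of the maximize-expected-utility rule for decisions under risk:
# EU(option) = sum of probability * utility over the option's outcomes.
# All numbers are invented.

def expected_utility(outcomes: list[tuple[float, float]]) -> float:
    """outcomes: (probability, utility) pairs for one option."""
    return sum(p * u for p, u in outcomes)

options = {
    "option X": [(0.999, 60.0), (0.001, -10_000.0)],  # high gain, tiny chance of disaster
    "option Y": [(1.0, 30.0)],                        # modest but certain gain
}

for name, outs in options.items():
    print(f"{name}: EU = {expected_utility(outs):.2f}")

best = max(options, key=lambda name: expected_utility(options[name]))
print("choose:", best)  # option X (EU 49.94) beats option Y (EU 30.00)
```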

For decision under uncertainty (when the outcomes are known but their probabilities are unknown or only partially/unclearly known), decision theorists recommend adopting not the maximizing-expected-utility rule but one of the other strategies that have been developed for that case, such as maximin and maximax (see Giere, 1991; Resnik, 2003; Hansson, 2005; Bradley, 2018). According to the maximin rule, one should look at the worst (the lowest-valued) outcome for each option and choose the option with the highest minimum value, disregarding any great benefits that may be offered by a different alternative (see Bradley, 2018). Maximin has been criticized as a very pessimistic, risk-averse strategy, because it excludes opportunities for great possible benefits. ‘Maximin makes some sense when we stand very little to gain from an option and have a great deal to lose’ (Resnik, 2003, p. 334). According to the maximax rule, on the other hand, the decision-maker should look at the best possible outcome for each option and choose the one with the maximum value, without considering the possible negative outcomes of that option. Maximax has been criticized as too risky, as reflecting wishful thinking, and thus as difficult to justify as a rational decision principle. Using this method to decide would make sense only if one had much to gain and little to lose (see Giere, 1991; Resnik, 2003). Between these two extremes a number of intermediate strategies for decision-making under uncertainty have been developed, but these too face serious objections and there is no agreement as to which is best (i.e., the most rational) (Resnik, 2003).
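
Operationally, the two rules differ only in which outcome of each option they attend to, as the following sketch with an invented payoff table shows:

```python
# Maximin vs. maximax for decisions under uncertainty: probabilities are
# unknown, so only the utilities of the possible outcomes enter.
# The payoff table is invented for illustration.

payoffs = {
    "option A": [1.0, 2.0, 3.0],      # modest outcomes, no disaster possible
    "option B": [-50.0, 5.0, 100.0],  # possible disaster, possible windfall
}

maximin = max(payoffs, key=lambda o: min(payoffs[o]))  # best worst case
maximax = max(payoffs, key=lambda o: max(payoffs[o]))  # best best case

print("maximin chooses:", maximin)  # option A: worst case 1.0 vs. -50.0
print("maximax chooses:", maximax)  # option B: best case 100.0 vs. 3.0
```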

Decision-making under ignorance, i.e., how to decide when the probabilities of outcomes are entirely unknown and not all the possible consequences of actions are identified, poses the greatest challenge to probabilistic decision theories, a crucial problem especially when the unknown possibilities may have catastrophic consequences. For such cases conventional probabilistic decision theories can provide no clear and sufficient guidance. ‘According to some, there is no single best strategy for decisions under ignorance, since different strategies may be justified under different circumstances and personal attitudes. The rational decision-maker tailors his or her strategies according to the nature of the risk and his or her philosophy of risk-avoidance’ (Resnik, 2003, p. 335).

Serious questions have also arisen with regard to the numerical representations of values demanded by traditional probabilistic decision theory and its more recent model, cost–benefit analysis (CBA) (see Section 4.1): what the numbers should express and what the unit of measurement for values/utilities should be, given that the outcomes of the options are usually of different kinds and incompatible with one another (e.g., economic, ecological, legal, and moral consequences). In cost–benefit analysis all values are converted into monetary worth, which is problematic (and counter to our intuitions) because the various outcomes cannot all be reduced to a common denominator (money), because the outcomes can affect values that cannot even be measured and may be beyond any material value, and moreover because the outcomes may be valued differently by different individuals and groups (Giere, 1991; Hansson, 2004; Resnik, 2003; Kinouchi, 2018). In other words, probabilistic models like the CBA model reduce the complex, multi-dimensional concept of value to a single, materialist dimension, utility, whereas personal, cultural, and tacit aspects should also be reckoned with in estimating the values of outcomes/options. We shall return to these points in Section 4.1.

In sum, the standard probabilistic decision models have been criticized for failing to cover the case of serious uncertainty, because they presuppose knowledge of numerical probabilities for their calculations, which is rarely possible for complex real-world risks, and more generally for disregarding the multidimensional character of risk and uncertainty and the socio-ethical implications of the related decisions and options. In the following sections we shall examine these points in greater detail, as well as the new orientations and proposals for decision-making in situations of high uncertainty and severity of risks, namely precaution-based and discourse-based decision strategies.

4 Coping with Uncertainty — Risk Management Strategies

4.1 Risk-Based Strategy

As we said in Section 2, risk has been approached from two basic perspectives, the realist and the constructivist, which imply different strategies for managing risk. Three main approaches have been developed and used to manage uncertainty: the risk-based or risk-informed strategy (of the conventional decision theory discussed in the preceding section), the precaution-based strategy, and the discourse-based strategy (see, e.g., Klinke & Renn, 2002; Clahsen et al., 2019; Aven & Flage, 2020; Rechnitzer, 2022).

Risk-based strategy essentially comprises two main phases: risk assessment, where potential outcomes are identified and quantified on the basis of scientific/empirical data, and risk management, where options for regulating the risks are proposed and selected on the basis of the data from the preceding risk assessment (see Resnik, 2003; Rechnitzer, 2022). As stated in Section 3, in the classical decision-theory approach the rule for choosing between options is to choose the one with the maximum expected utility. The quantitative arithmetical calculation of expected utilities does, of course, facilitate comparison and choice among the options, but is only feasible if there is a common unit of measurement for the values of all the outcomes of the options — which, however, are usually of different kinds. Cost–benefit analysis, the most widely used tool in this framework, uses money as a common measurement unit, converting all outcomes (benefits and costs/risks) into monetary terms, regardless of their kind and compatibility, that is, whether the consequences are economic, health-related, environmental, esthetic or ethical.
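
A toy illustration of the structure of such a calculation follows; the options, outcome categories, and all monetary figures are invented, and real cost–benefit analyses also involve steps we omit here, e.g., discounting future outcomes:

```python
# Toy cost-benefit analysis: heterogeneous outcomes are all converted to
# money and summed; the option with the highest net benefit is chosen.
# All figures (in millions of a currency unit) are invented.

options = {
    "project P": {"energy benefit": 5.0, "jobs": 1.0,
                  "health damage": -2.0, "habitat loss": -0.5},
    "project Q": {"energy benefit": 3.0, "jobs": 0.5,
                  "health damage": -0.2, "habitat loss": -0.1},
}

net = {name: sum(vals.values()) for name, vals in options.items()}
print(net, "-> choose", max(net, key=net.get))
# The ethically contested step, discussed below, is pricing health and
# habitat in money at all.
```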

This standard quantitative approach to risk, and its risk-management rule based on maximum expected utility, has been criticized as conceptually and ethically problematic. The main conceptual problem noted is that the probabilistic knowledge required for this approach limits its effectiveness to cases of quantifiable risk, with the result that it tends to omit estimations of non-quantifiable or hardly quantifiable risks, or risks with ‘low, but significant probabilities for catastrophe’, and that it gives more weight to ‘current costs than [on] future benefits.’ In other words, it tends to neglect uncertainty, ‘leading to irrational and harmful decisions’ (Rechnitzer, 2022, p. 81).

The criticisms raised from the ethical standpoint highlight moral issues that arise with the methods of calculating risk in the risk management phase. The first relates primarily to the quantification of values and the use of a common unit of measurement for comparing and trading off incommensurable values, especially the monetary unit used by cost–benefit analysis; this creates a serious credibility problem for the estimation of risk and utility, given that — as Kinouchi (2018), for example, argues — not all values can be quantified, since for some people some values are beyond any monetary or other material price. The second objection is that these methods ignore questions of social justice: a global aggregation of costs and benefits which does not take into account how and to whom these are distributed, as in the CBA model, may not result in a fair distribution.Footnote 8 (Another ethical question, which is little discussed in the literature on risk assessment and management and ought to be taken into account, is whether the assumption of a risk is voluntary or imposed by others (see, e.g., Hansson, 2018).)

Another issue connected with risk management is the claimed objectivity of risk-based management. Since the risk-based management approach draws on empirical data and expert estimates, it is often regarded as objective and scientific, although many analysts have pointed out the limits to this objectivity, especially in the risk management phase, and highlighted and investigated the role of other socio-political and moral factors in the decision-making process. Resnik (2003), for example, notes that ‘[r]isk management is not a purely objective endeavour, however, because it employs normative assumptions about the types of harms we should be concerned about, the level of risk that is acceptable, as well as the distribution of benefits and harms. …To answer these sorts of questions in risk management, we must appeal to social, political and moral values (…)’ (p. 333). Rechnitzer (2022) also clarifies that ‘[w]hile the risk assessment phase should be as objective and value-free as possible, the decisions that take place in the risk management phase should be, although informed by science, based on the values and interests of the parties involved.’ (p. 79).

Resnik (2003) distinguishes two kinds of problems in the application of the risk-based management model: scientific uncertainty, which is due to the lack or inconclusiveness of the empirical evidence needed for the estimation of risks, and which makes the involvement of values a necessary element in making risk-related decisions; and axiological uncertainty, which arises from the diversity of opinions and preferences in the valuation and prioritization of the outcomes, in consequence of the different values expressed in pluralistic societies. Disagreements arising from scientific uncertainty as regards, e.g., the assessment of the severity and acceptability of a risk could be overcome through the development of additional relevant knowledge, if of course this is possible at the time when the decision must be taken. Controversies due to axiological uncertainty are more resistant, and in democratic societies handling them is a political matter which is addressed through the familiar, more or less advanced, tools that democratic systems have developed for solving such problems (ballots, consultations, legal arrangements, modes of representation, etc.). An appreciation of both these kinds of uncertainty is needed for a more integral understanding of the risk-related problems and controversies that appear in the real conditions of science and society. For science education, this means on the one hand that uncertainty as an aspect of scientific knowledge must be included in the teaching of NOS, and on the other that students should be trained in the reasoning, consensus-seeking, and discourse skills needed for dealing with the diversity of views and interests in pluralistic societies (see Sections 4.3 and 6).

The conceptual and ethical problems (deficits and limitations) of conventional decision theories and cost–benefit analysis, coupled with potentially catastrophic consequences, for example of climate change, created the need for, and led to the development and adoption of, alternative approaches and strategies for making decisions under conditions of uncertainty and ignorance, and specifically the precaution-based and discourse-based strategies that are the subject of the following section.

4.2 Precaution-Based Strategy

As we have just seen, probabilistic decision-making models are applicable to and effective for decision-making under decision-theoretic risk (i.e., when the outcomes of actions and the probability of their occurrence are known), but reveal their limitations in decision contexts of serious uncertainty and ignorance (where we do not know the probabilities of the possible outcomes and where there may exist unknown outcomes). In addition, as discussed in Section 4.1, the probabilistic models also raise ethical questions, such as, e.g., the monetary value of life in CBA and the disregarding of questions of justice for human societies and the environment. Precautionary thinking and strategies are proposed as an alternative approach in such circumstances: that is, cases involving serious risks but where the scientific evidence is insufficient or inconclusive, such as the release of transgenic plants, or specific applications of genetic engineering (see Hopster, 2021). In sum, the precautionary approach is recommended for the management of risks that are characterized by a relatively high degree of uncertainty but associated with very harmful outcomes (see, e.g., Klinke & Renn, 2002; Aven, 2020).

At the core of precaution-based strategies is the precaution principle, whose basic idea is that anticipatory action should be taken in the face of possible dangerous effects on health and the environment (e.g., from the use or operation of specific substances and systems), even if these consequences have not been fully scientifically proven (see, e.g., Van Dyke, 2004; Peterson, 2006; Som et al., 2009; Rechnitzer, 2022). The precaution principle is also regarded as an ethical principle, because its target issues include moral concerns for the environment, sustainable development, and the rights of future generations (see IBC 2021; Rechnitzer, 2022).

The precaution principle has its origin in the German ‘Vorsorgeprinzip,’ conceived as a leading decision principle for the legislative treatment of environmental issues in that country; it has since increasingly found its way into national and international agreements on environmental matters, and gradually also into questions of nutrition, health, security, and rights (see, e.g., Tickner et al., 1999; Peterson, 2006; Som et al., 2009).

There are multiple definitions, interpretations, and ways of implementing the precaution principle in different countries and fields of activity, depending on the differing degrees of uncertainty and gravity of the risks and on the context in which they appear and are dealt with. All of them, however, follow one of two basic lines: the so-called weak version, benchmarked by the formulation of the principle in the 1992 Rio Declaration on Environment and Development, which states that ‘[w]here there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation’; or the strong version, exemplified by the Wingspread Statement on the Precautionary Principle, which says that ‘[w]hen an activity raises threats of harm to human health or the environment preventive action should be taken’, and which moreover lays the burden of proof for the harmlessness or acceptability of risks on the producer (of risky products or systems) rather than on the environmentalists and the decision-makers responsible for risk management and risk policies, as was customarily the case (cited, e.g., in Van Dyke, 2004; Peterson, 2006; Som et al., 2009; Rechnitzer, 2022).

Preconditions for efficient application of precaution-based strategies and tools have also been outlined. These include constant monitoring of the outcomes of the measures implemented, a search for alternative technologies and courses of action with the fewest and least persistent negative side effects, and a readiness to change or adapt in the light of new scientific evidence. They also include the creation of institutions and organizations for the co-ordination and responsible promotion of precautionary measures, openness and communication of information relating to the data and reasoning used in the decision-making process, and the participation of the public and the agents involved, especially in the case of high uncertainty and ignorance (see, e.g., Klinke & Renn, 2002; Steel, 2013; Rechnitzer, 2022).

In a long-standing and continuing debate, the precaution principle, although widely adopted, has been criticized as unable to offer clear and sufficient guidance and decision rules for handling uncertainty. Among the objections to it are that it has multiple and vague definitions and terms, that it is incoherent and absolutist (highly risk-averse), that it is ‘unscientific’, and that it stifles technological growth and innovation. These criticisms have been successfully addressed by the counter-arguments and clarifications of the principle’s proponents (see, e.g., Sandin et al., 2002; Steel, 2013; Rechnitzer, 2022). In this paper we do not present the whole of this interesting debate but focus on two main points of criticism (and the responses to them) which are important for the application of precautionary thinking and more generally for decision-making under uncertainty, and which concern (a) how one can distinguish between credible/realistic and non-credible/illusory risks, so as to avoid overestimating the risks and the responsive measures needed to deal with them, and (b) the viewpoint of precautionary thinking with regard to technological innovation and its risks. In our view, these two topics have important educational implications, and they will be examined separately in Section 5.

To summarize, among the basic elements that characterize the precaution-based approach, and that should be discussed in science education, are that, in comparison with risk-based strategies, it takes uncertainty seriously into account rather than avoiding it, that it embraces ethical considerations, and that it urges discursive and participatory practices for coping with high uncertainty and ignorance. These elements make precaution-based thinking advantageous and particularly relevant for dealing with pressing contemporary problems that generate high uncertainty and severe risks/dangers, such as climate change. Cost–benefit analysis is a clear and useful tool, but mainly for decision-making in conditions of limited uncertainty: that is, for dealing with ordinary risks, meaning reversible risks of low complexity and low catastrophic potential for which there exist ample data and experience, such as those handled by insurance companies, e.g., fire, automobile accidents, and accidents in transportation or the workplace.

4.3 Discourse-Based Strategies — The Analytic-Deliberative Process

As we have said, a contemporary conception of risk integrates both the realist and the constructivist perspectives. This conception modifies traditional risk management, adding new aspects to ways of coping with uncertainties. Risk assessment and management should take into consideration both the experts’ assessments and the layman’s concerns and interests. ‘For making risk evaluation and management consistent with the best scientific knowledge and the most appropriate social values, we consider it to be justified and necessary that both physical criteria as well as social concerns are integral parts of these evaluations (…)’ (Klinke & Renn, 2002, p. 1076). This is echoed in recommendations in science education literature that the conception of risk take into account both its objective and its subjective dimensions.

A significant change in this integrative approach shifts the emphasis to discursive and participatory risk assessment and management practices, i.e., to the engagement of the agents and the people affected by the consequences of the risk assessment and handling. This has provoked a discussion focused on the nature and substance of these discourses and especially of social participation: who, in what cases, and to what end should have a say in these processes. Carrier (2021), for example, notes that involving social agents, whose appreciations and points of view may be biased, should not mean neglecting scientific facts, although this is in no way an argument against social participation in, and the democratization of, science. ‘This suggests that RRI [responsible research and innovation] needs standards or correction mechanisms that go beyond bringing in social agents. Acting responsibly is more than being responsive’ (p. 4751).

In this context Klinke and Renn (2002) initially distinguish three basic characteristics of risks and risk-related knowledge, each requiring its own appropriate handling practices: complexity, which concerns the difficulty of identifying causal relationships between adverse effects and their potential causes; uncertainty with regard to the nature or the likelihood of adverse effects; and ambiguity, which relates to differences in value judgments (of both experts and non-experts) regarding, e.g., the severity of the risks/dangers and the measures to be taken for their regulation (see also Clahsen et al., 2019). In the authors’ view, the appropriate risk management strategies are, respectively, the risk-based, the precaution-based, and the discourse-based approach. In addition, regarding all three risk management strategies as requiring participatory and deliberative processes, they describe the type, the substance, and the aim of the discourse and participation needed for each to be effective.

If the problem is complexity, Klinke and Renn think that risk managers (decision-makers) should base their risk assessment on expert estimations and state-of-the-art knowledge: ‘[i]t does not make much sense to incorporate public concerns, perceptions, or any other social aspects into the function of resolving (cognitive) complexity’ since the sophisticated methods needed for assessing and regulating complex risks ‘can be offered by scientists and experts better than by anybody else’. They argue that the meaningful discourse here is an ‘epistemological discourse’ among scientists and experts, with the objective of providing ‘the most adequate description or explanation of a phenomenon’ and its effects (‘for example, the question of which physical impacts are to be expected by the emission of specific substances’), while the more complex and uncertain the phenomenon, the more necessary a multidisciplinary discourse among experts (p. 1086). The corollary to this view is that the scientific community has a duty to provide policy-makers and stakeholders with trustworthy scientific information.

If there is uncertainty in the risk-related scientific knowledge, then the probability and severity of the risks cannot be calculated solely on the basis of the scientific data and value judgments are necessary to fill the gap. Thus, in the face of scientific uncertainties, ‘[d]ecisions require more than input from risk specialists’; they need to include concerns and assessments of stakeholders and other interested parties. In this case there arise normative disagreements among the participants regarding the assessment of the severity and level of acceptability of a risk and the measures needed to deal with it. Such disagreements, say Klinke and Renn, require a ‘reflective discourse’ that seeks to achieve compromise or consensual decisions and options. The desideratum here is ‘to find the adequate and fair balance between the costs of being overcautious versus the costs of not being cautious enough (…).’ (Klinke & Renn, 2002, p. 1086).

In the case of ambiguity, controversies of a socio-political and ideological nature come into play, relating to the valuation of the outcomes of specific actions, i.e., how positive or negative their consequences are in the end result (such as, for example, the arguments pitting long-term ecological harm against the possibility of a sufficient food supply through the use of genetically modified organisms in agriculture). The risk issues in such controversies are associated with broader and more fundamental values, convictions, and interests, and the negotiations and value trade-offs are correspondingly more complex and difficult. Such value conflicts are more resistant, and solving them may require the participation of the actors representing them in risk evaluation and management processes. The kind of discourse required here for resolving value conflicts is, according to Klinke and Renn (2002), a ‘participatory discourse’ ultimately aimed at conflict resolution and joint vision-building. ‘Coping with ambiguity necessarily leads to discursive management tools, i.e., communicative processes that promote rational value disputes (…)’ (p. 1087). The implication is that, since scientific data about risks can be insufficient and cannot be the sole basis for appropriate action and fair decisions, a new kind of risk management is required which involves not only scientists but also the interested and affected parties. Science education should prepare students to make their estimations based not only on scientific data and expert advice but also on the opinions, concerns, and expertise of the persons and communities affected.

Klinke and Renn (2002) suggest that, taken together, these kinds of dialogue (epistemological, reflective, and participatory discourse) ‘could be labelled as an analytic-deliberative procedure for risk evaluation and management’ (p. 1092). Clahsen et al. (2019) emphasize the element of social participation and explain that the concerns, needs, and interests of affected parties ‘should continuously be taken into account and merged into an “analytic–deliberative” process of risk assessment and risk management (…). Analytic, since decisions should be informed by rigorous, state-of-the-art science, and deliberative, since interested and affected parties should participate in all phases of the risk assessment process’ (p. 447).

Similarly, Aven (2018) thinks that in most cases the appropriate strategy will be a mixture of these three approaches, and Aven and Flage (2020) explain that ‘the higher the stakes involved and the larger the uncertainties, the more weight should be placed on the second category [the precaution-based strategy], and the greater the interpretative ambiguity [different interpretations of the same data] and normative ambiguity [different views related to the relevant values] the more weight should be placed on category III [the discourse-based strategy]’ (p. 2132).
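
Purely as an illustration of the direction of this heuristic, one might caricature it as follows; the scales, formula, and weights are entirely our own invention and carry no claim to validity, since only the qualitative direction of the rules comes from the text:

```python
# Toy sketch of the weighting heuristic quoted above: the appropriate mix of
# strategies depends on uncertainty, stakes, and ambiguity. All scales and
# formulas are invented; only the direction of the rules follows the text.

def strategy_weights(uncertainty: float, stakes: float, ambiguity: float) -> dict[str, float]:
    """Inputs on a 0-1 scale; returns a normalized emphasis per strategy."""
    raw = {
        "risk-based":       max(0.0, 1.0 - max(uncertainty, ambiguity)),
        "precaution-based": uncertainty * stakes,  # high stakes + large uncertainty
        "discourse-based":  ambiguity,             # interpretative/normative ambiguity
    }
    total = sum(raw.values()) or 1.0
    return {k: round(v / total, 2) for k, v in raw.items()}

print(strategy_weights(uncertainty=0.8, stakes=0.9, ambiguity=0.3))
# -> precaution-based dominates for a high-stakes, highly uncertain issue
```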

To summarize: drawing as it does on contemporary risk-related research, and treating social concerns and fairness issues as important components of risk estimation and management processes, the suggested analytic-deliberative approach is both scientifically grounded and socially sensitive, and we regard it as a proper foundation for the teaching of assessment and decision-making under uncertainty in science education. This approach can contribute significantly to supporting what are considered the basic competences for dealing with complex and unpredictable risks, i.e., the ability to participate in critical and reflective discourse and in collective decision-making (e.g., Birdsall, 2022; Elmose & Roth, 2005; Hansen & Hammann, 2017). Following this approach in teaching risk helps students gain a solid competence in the above types of discourse and, at the same time, an awareness of their aims and of why they are useful in the resolution of controversies.

In addition, engaging students in such discursive situations relating to real and current risk-related SSIs involves and supports three basic science education goals — acquiring scientific knowledge, understanding NOS, and enhancing students’ reasoning-argumentation abilities — and gives them meaning beyond the classroom. Concretely, engaging students in epistemological discourse (which is cognitive in nature) prompts them to use, and to probe more deeply into, the scientific knowledge they acquire (in the classroom or from other sources) about the phenomena involved in the particular risk-related SSI problem and their consequences for, e.g., health or the environment, so that they can enrich the arguments they use in their discourse with scientific data (Birdsall, 2022). Engaging students in reflective discourse (especially in the case of scientific uncertainty) should raise their awareness of several points: that in view of uncertainty or lack of risk-related scientific knowledge there is little sense in basing judgments about the severity of risks and their handling on such imprecise or insufficient knowledge; that such decisions must therefore be partly values-based, which leads to disagreements, because values can reflect different perspectives (e.g., they may aim at risk avoidance or at promoting political or economic interests); and that resolving such disagreements requires discourse aimed at balancing trade-offs and reaching compromises. Finally, students’ experience with ‘participatory discourse’ helps them see that even when there is no scientific uncertainty, and therefore no disagreement on that basis, there will still be axiological differences based on the deeper and more resistant socio-political interests or ethical values of those concerned, and that the object of such participatory discourse should be conflict resolution rooted, as far as possible, in the shaping of a common approach to the problem, e.g., beginning by finding commonly accepted values such as the protection of life and health. Experience shows that in this case discussion and argumentation are not generally sufficient in themselves to lead to such results, and that in the event the related decisions and policies reflect the outcome of a vote; the important thing, however, is that in democratic societies the vote is preceded by participatory deliberations. Participatory discourse in the classroom also provides an opportunity for discussion and reflection on the value and the strengthening/improvement of democratic institutions and conventions, to which, as future citizens, the students might contribute.

In the context of this approach students should have the opportunity to learn that scientific knowledge is indispensable for coping with uncertainty and risks, but also to become aware that it is not always enough for decision-making, since in many cases it may contain uncertainties and insufficiencies, and that participatory discourse is then necessary for coping with the resulting disagreements and for reaching durable, balanced decisions. An ultimate educational benefit of introducing discursive situations into science teaching is that students come to realize that such critical and reflective discourse on controversial issues is more beneficial and efficient than egocentric positions and polarizing debates; in other words, it promotes a more sophisticated citizenry.

5 Reaching Appropriate Decisions Under Uncertainty — Two Topics

5.1 Credible Risks, Risky Activities, and Appropriate Responsive Measures

In this section we discuss two topics that we see as essential to decision-making in conditions of uncertainty and risk and that we think ought to be taught; their elaboration here provides a basis for transferring them to the classroom. The first has to do with criteria for assessing whether theoretical risks constitute a real possibility, the object being to avoid both overestimating risks, which would create unjustified fears and excessive precautionary measures, and, conversely, underestimating them, which would lead to discounting real threats and taking inadequate protective measures. The second topic has to do with the risks peculiar to contemporary technological innovation and with considerations and views on the misuse or prudent use of innovations.

The main questions relating to the first topic are (a) what criteria to use to distinguish credible from far-fetched risks/threats associated with various activities, and how to identify activities that may have catastrophic consequences when there is insufficient scientific evidence on which to base such judgments, and (b) how broad and how rigorous any precautionary risk-reduction measures should be.

Regarding the first question, Resnik (2003) offers guidelines and specific criteria for assessing the credibility of hypotheses about the existence and causes of specific risks. He argues that if we cannot, for want of data, know whether a hypothesis is certain, we can nonetheless — and this is enough — judge whether it is plausible. ‘A plausible hypothesis is one that we believe to be at least possible and worthy of further testing. That is, the hypothesis is not just logically possible but it is a serious possibility, given our corpus of knowledge (…)’ (p. 337). To this end he recommends applying the epistemic criteria that scientists use for the acceptance of hypotheses and theories, and also for the initial choice of a plausible hypothesis as a basis for designing the study of the problem they are investigating, a hypothesis they will test experimentally at a later stage (comparing its predictions with the empirical data/phenomena). Such epistemic criteria include the coherence, the explanatory and predictive power, and the simplicity and precision of the hypothesis/theory. The coherence of a hypothesis relates to its internal consistency and its consistency with well-established knowledge; its explanatory power concerns its ability to explain phenomena (properties and behaviors of natural systems); its predictive power relates to the compatibility of its predictions with events/phenomena that have already been observed (and/or with the prediction of unknown phenomena, which paves the way for new research); simplicity means that the structure of the hypothesis involves simple concepts and mechanisms; and precision means that the hypothesis does not suffer from extensive ambiguities in its concepts and conceptions. These criteria are of practical value, although they do not guarantee the emergence of truths and they are problematic in their implementation, since, e.g., they are not always and everywhere applied in the same manner or with the same weighting, and they may contradict each other (see Kuhn, 1970, 1977; Longino, 1990; Develaki, 2022). According to Resnik (2003), ‘one must weigh and compare these various criteria in the context of the decision at hand (…). For example, while hypothesis H is simpler than hypothesis J, scientists may prefer J to H because J has more explanatory power and precision than H.’ (p. 339).
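To make this weighing of criteria concrete, the following minimal sketch (in Python) scores two hypotheses against Resnik’s five criteria. All scores and weights are hypothetical illustrations, not values given by Resnik (2003); in practice both are context-dependent judgments.

```python
# A minimal, hypothetical sketch of Resnik-style criteria weighing.
# Scores (0-1) and weights are illustrative assumptions only.

CRITERIA = ["coherence", "explanatory_power", "predictive_power",
            "simplicity", "precision"]

def plausibility_score(scores: dict, weights: dict) -> float:
    """Weighted sum of epistemic-criteria scores for one hypothesis."""
    return sum(weights[c] * scores[c] for c in CRITERIA)

# Resnik's schematic example: H is simpler, but J has more
# explanatory power and precision.
hypothesis_H = {"coherence": 0.8, "explanatory_power": 0.5,
                "predictive_power": 0.6, "simplicity": 0.9, "precision": 0.5}
hypothesis_J = {"coherence": 0.8, "explanatory_power": 0.8,
                "predictive_power": 0.6, "simplicity": 0.6, "precision": 0.8}

# Context-dependent weighting: here explanatory power and precision
# are weighted more heavily than simplicity.
weights = {"coherence": 0.25, "explanatory_power": 0.30,
           "predictive_power": 0.20, "simplicity": 0.10, "precision": 0.15}

for name, h in [("H", hypothesis_H), ("J", hypothesis_J)]:
    print(name, round(plausibility_score(h, weights), 3))
# With these assumed weights J outscores H (0.74 vs. 0.635) despite
# H's greater simplicity, mirroring Resnik's point above.
```

Shifting weight toward simplicity can reverse the ranking, which is precisely why the criteria offer guidance for judgment rather than a mechanical verdict.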

In our context, this approach means examining whether a hypothesis about the existence and causes of risks fulfils such criteria and whether it involves scientifically grounded causal mechanisms that support its claims (Resnik, 2003). Other authors also note the importance of a known causal mechanism, grounded in scientific knowledge, that can explain the hypothesis/position that some activity can lead to a specific (catastrophic) outcome. For example, in physics there is the mechanism of the greenhouse effect, which explains the correlation between greenhouse gas (GHG) emissions and rising global temperatures, thus strengthening the hypothesis of the anthropogenic causation of global warming (see, e.g., Steel, 2013; Hopster, 2021). In the same spirit Hopster (2021) notes that for assessing the plausibility of hypotheses in the case of climate uncertainty, ‘[s]everal second-order epistemic considerations for identifying realistic possibilities’ are used, such as ‘whether there is a solid mechanistic understanding of the system under consideration (…), whether there are independent lines of evidence supporting an envisioned outcome (…), the level and quality of expert consensus (…)’ (p. 15).

Students can use examples from controversial socio-scientific issues to practice applying these criteria to hypotheses or positions formulated in debates which argue, e.g., that a given outcome of substantial harm from a specific activity does or does not constitute a real possibility. Following Resnik’s idea, take as an example the view that global warming is essentially anthropogenic, and suppose for the sake of argument that the evidence for the hypothesis is ambiguous or insufficient; we can then check the plausibility of the hypothesis by examining which of the above epistemic criteria it meets. The hypothesis has explanatory power: it explains the data we have on the significant rise in temperature, especially in the last century of intense industrialization compared with earlier ages, and it is logically consistent with our background knowledge. It is also reinforced by an existing scientifically grounded mechanism (the greenhouse effect), but it lacks precision in some specific respects, e.g., the rate and magnitude of the rise in temperature or the rate and timing of the melting of land ice. According to Resnik, ‘[t]o assess the plausibility of the global warming hypothesis, scientists must weigh and consider these and other factors. Given the definition of plausibility used above, the hypothesis is a serious possibility and is worthy of further testing’ (p. 340).

In the same way students can test the plausibility of the hypothesis regarding the role of anthropogenic factors in the appearance of zoonoses (such as the global coronavirus outbreak of 2020) or in the reduction of biodiversity in the context of sustainability. Here students can consider questions relating to contemporary lifestyles and their effects on animals and the environment, such as factory farming, the massive use of antibiotics, deforestation, urbanization, and the misuse of biotechnologies. More generally, they can examine the ethical dimension of these topics and the possibilities for preventing such hazardous situations rather than merely dealing with them afterwards with measures that may well prove to have equally harmful consequences.

Regarding the identification of risky activities, the literature proposes some general features that mark the activities in question as potentially hazardous or likely to have harmful or catastrophic consequences, so that they should be avoided or replaced. Hansson (2005, pp. 66–67), for example, proposes as one such indicator whether the consequences are limited in time and place. If they are not, this increases the uncertainty and the severity of the possible catastrophic consequences of those activities: for example, the widespread implementation of untested new technological innovations or uncontrolled interventions (e.g., ‘global emissions and the spread of chemically stable pesticides’) that disturb the balance of complex systems (e.g., ecological, atmospheric, economic) in a possibly irreparable or irreversible way. Klinke and Renn (2002) recommend similar criteria for judging the severity of potential damage with unknown or contested probability: ‘In this dilemma, we advise risk managers to use additional criteria, such as “ubiquity,” “irreversibility,” and “pervasiveness over time,” as proxies for judging severity’ (p. 1091).
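As a classroom illustration, these proxy indicators can be read as a simple checklist. The sketch below uses hypothetical field names and an assumed flagging rule; it is not a procedure given by Hansson or by Klinke and Renn, only one way to operationalize their proxies.

```python
# Hypothetical checklist sketch of severity 'proxies' (ubiquity,
# irreversibility, pervasiveness over time); the flagging rule is an
# illustrative assumption, not a rule from the cited literature.

from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    ubiquitous: bool            # consequences not limited in place
    irreversible: bool          # damage cannot be undone
    pervasive_over_time: bool   # consequences not limited in time

def flag_for_caution(a: Activity) -> bool:
    """Flag the activity if any severity proxy applies."""
    return a.ubiquitous or a.irreversible or a.pervasive_over_time

pesticide_spread = Activity("chemically stable pesticides",
                            ubiquitous=True, irreversible=True,
                            pervasive_over_time=True)
print(flag_for_caution(pesticide_spread))  # True -> avoid or replace
```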

As we have said, the second question focuses on when precautionary measures should be taken in response to possible risks, and how far-reaching they should be, so as to avoid an over- or underestimation of risk and response. The overestimation and underestimation of risk are matters of serious concern in the research literature. The question here is at what point an alleged risk/threat becomes so serious that, despite its scientific uncertainty, it tips the scale in favor of taking precautionary measures. Steel (2013, p. 7) cites three conditions that determine when and how the precautionary approach should be adopted: the damage condition (the risks must be above a certain threshold of damage), the knowledge condition (there must be sound scientific indications that the outcome is non-negligible, or good epistemic grounds for taking the threat seriously), and a suggested remedy (a recommendation of precautionary measures that should be taken in order to avoid or lessen the risk). He also makes it plain that precautionary measures must be governed by the principle of proportionality, which means that they must be efficient and consistent. Efficiency means that precautionary measures must effectively minimize the target threat without being excessive; they must be calibrated to the probability and severity of the threat. Consistency means that the measures must not have effects (economic, environmental, health-related, moral) that are worse than the risks/damage they were meant to prevent (e.g., a remedy must not have side effects more harmful than the disease it was meant to treat). Following these rules implies searching for alternative precautionary steps and choosing those with minimal negative side effects, which may indeed even offer side benefits: for example, using sustainable energy resources to generate electricity has general health and ecological benefits apart from helping ward off global warming.
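Steel’s three conditions and the proportionality rule can be sketched as a pair of checks. The following minimal Python sketch is an illustration under assumed thresholds and field names; Steel (2013) gives no such quantification, and in practice each judgment is contested and context-dependent.

```python
# A hypothetical sketch of Steel's (2013) three conditions and the
# proportionality rule; thresholds and fields are illustrative only.

from dataclasses import dataclass

@dataclass
class Threat:
    expected_damage: float      # severity estimate (arbitrary units)
    epistemic_support: float    # 0-1: strength of scientific indications

@dataclass
class Measure:
    reduces_threat: bool        # does it effectively target the threat?
    excessive: bool             # is it out of proportion to the threat?
    side_effect_damage: float   # harm the measure itself may cause

DAMAGE_THRESHOLD = 10.0         # assumed 'damage condition' threshold
KNOWLEDGE_THRESHOLD = 0.3       # assumed 'knowledge condition' threshold

def precaution_warranted(threat: Threat) -> bool:
    """Damage and knowledge conditions: is precaution triggered?"""
    return (threat.expected_damage >= DAMAGE_THRESHOLD
            and threat.epistemic_support >= KNOWLEDGE_THRESHOLD)

def proportionate(measure: Measure, threat: Threat) -> bool:
    """Efficiency: targets the threat without excess.
    Consistency: the remedy must not be worse than the disease."""
    return (measure.reduces_threat
            and not measure.excessive
            and measure.side_effect_damage < threat.expected_damage)

threat = Threat(expected_damage=50.0, epistemic_support=0.6)
measure = Measure(reduces_threat=True, excessive=False,
                  side_effect_damage=5.0)
print(precaution_warranted(threat) and proportionate(measure, threat))  # True
```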

The proportionality rule aims mainly at ensuring a balanced reaction to possible dangers in the context of uncertainty, and specifically at guarding against over- or underestimating the possible risks. The danger of taking excessive measures against uncertain or negligible risks has been used as an argument against, for example, environmental measures (e.g., carbon taxes or cap-and-trade schemes) for the reduction of greenhouse gas (GHG) emissions, on the grounds that their cost would lead to the cancellation of, or cutbacks in, investments addressing other more certain and immediate risks or humanitarian programs. Steel (2013) counters these arguments on the one hand with data showing that Sweden, which implemented a carbon tax in 1991, suffered no such negative economic effect, and on the other with the observation that there had in fact been no such investments in humanitarian programs before environmental taxes were introduced, and that environmental taxes could well be used to finance such programs if any were proposed.

Sandin et al. (2002) make the additional point that weighing the outcomes (costs and benefits) of a decision depends on where one places the horizon: ‘If the horizon is too narrow, then decisions will be recommended that are suboptimal in a wider perspective … If we apply expected utility maximization to, for instance, crop protection, seen as an isolated issue, then the decision with respect to pesticides may very well be different from what it would have been if we had applied the same decision rule to a more widely defined decision problem in which effects on nutrition and health are included’ (p. 293) (see also Hansson, 2005).
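Sandin et al.’s point about the decision horizon can be illustrated with a toy calculation. The numbers below are entirely assumed and serve only to show how an expected-utility ranking can flip when the same decision problem is framed more widely.

```python
# A toy illustration (assumed numbers) of Sandin et al.'s point that
# expected-utility rankings can change when the decision horizon widens.

# Narrow framing: crop protection only (yield benefit minus direct cost).
narrow = {"pesticide": 100 - 20, "no_pesticide": 70}

# Wider framing: include (assumed) expected nutrition/health costs.
wide = {"pesticide": 100 - 20 - 40, "no_pesticide": 70 - 5}

for framing, utilities in [("narrow", narrow), ("wide", wide)]:
    best = max(utilities, key=utilities.get)
    print(framing, "->", best)
# narrow -> pesticide; wide -> no_pesticide: the recommended option
# depends on where the horizon is placed.
```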

5.2 Technological Innovations and Associated Risks

The second topic we think it important to introduce for discussion in science education is that of contemporary technological innovations and their risks, such as genetic engineering, nanotechnology, biotechnology, and digitalization, and the uncertainty or ignorance relating to the consequences of their application (see, e.g., Som et al., 2009). The main points for classroom discussion here are, first, the particular features of the risks associated with novel technologies and the difficulty of predicting and handling them, and, second, the approaches and proposals that have been developed in relation to these problems, which suggest a re-orientation in the valuation and use of innovations, placing them in a framework of more general values and lifestyles.

Novel technologies can help deal with various problems and improve the quality of life, but they also create new risks, different in kind and extent, that were unknown in earlier ages of more moderate interventions and softer technologies: ‘For example, a nuclear power plant accident in one geographic location can have global consequences and can affect the wellbeing of actual as well as future generations’ (IBC, 2021, p. 21).

Handling the risks of novel technologies is difficult with the usual methods of risk assessment and management, first of all because these are new and unknown risks for which we have no previous experience or data (Kinouchi, 2018). Predicting their effects on the environment and their consequences for societies is difficult for several further reasons. The ultimate impact on society and the environment (the benefits or harms from the application of a piece of scientific research) is usually unclear owing to the ambivalent nature of research (the co-existence of risks and opportunities), while the findings of research carried out for a good purpose can be used for ill and vice versa (see Develaki, 2008). More generally, where the course of a research project in science or technology will ultimately lead, and whether in the long term the balance of its practical effects will be positive or negative, is as a rule not predictable beforehand (see Carrier, 2021). Moreover, the increasing power of the novel technologies, combined with the increasing pace of research and of the production of technological products, often outstrips the possibility of timely prediction of the effects of their application (see, e.g., IBC, 2021; Som et al., 2009).

Given the major place and influence of the new technologies in our lives and on the environment, and in view of the uncertainties as to their consequences, which may be unacceptable and irreversible, a thoughtful approach to their assessment and use is of vital importance, especially for future generations. Students should be aware of and reflect on the different opinions and approaches that have been formulated regarding the relationship between new technologies, quality of life, and the environment, and regarding ways of dealing with technological risks.

For example, one strategy proposed for choosing between alternative technological solutions is the sequential application of alternative options alongside on-going research into the currently applied option and the possible alternatives. The knowledge and experience gained from the implementation of an earlier option is then used in the design and application of subsequent safer, improved options. Based on the controversy in the 1960s over the use of nuclear energy for generating electricity, Giere (1991) argues that ‘This approach [the sequential strategy] requires the formulation of options that can be carried out sequentially in such a way that information gained at earlier stages can be used to improve decisions at later stages’, and that ‘Any sequential option must include research that can contribute to the formulation of new options—for example, options involving renewable resources, such as solar energy, and decentralized generation’ (p. 200).

Along the same lines, Klinke and Renn (2002) point out that a precautionary approach to the uncertainty of new technologies ‘would imply a gradual, step-by-step diffusion of risky activities or technologies until more knowledge and experience is accumulated’ (p. 1075). Carrier (2021) regards small-scale novel technological advances based on solid knowledge and acquired experience, such as technologies that improve existing, tested technologies, as an instance of responsible research and innovation; more generally, he regards as responsible the research that thoughtfully considers the possible negative consequences of new technologies before designing and building them.

Approaches to the evaluation of technological innovations and their consequences, and constructive proposals rooted in a more radical, more holistic perspective, have also been formulated. These challenge the paradigm that considers the negative effects of technological applications justified in the name of ‘technological progress’ and propose a more informed perception and stance. Kinouchi (2018), for example, argues for a shift from ‘a materialist worldview deeply informed by the values of technological progress’, which excludes values related to social well-being and environmental safety (Lacey, 2009), to a different approach which brings to the fore other values that people consider inestimable and non-negotiable, e.g., social and environmental values (p. 238). Tickner et al. (1999) agree that thoughtful decisions about new activities imply that a redefinition of development is needed, permitting it ‘not only to include economic well-being but also ecological well-being, freedom from disease and other hazards’ (p. 19). IBC (2021) refers to the so-called Slow Science movement (http://slow-science.org): ‘As promoted by the scientific movement Slow Science (Slow Science Academy, 2010), scientists must, beyond the laboratory or the computer, take the time to reflect on the major questions posed by relentless scientific progress.… The aim is not to give up scientific progress, but to carry it out with thoughtful reflection about its consequences.’ (p. 24).

The above approaches, stances, and decisions regarding technological innovations and other similar topics are influenced by general underlying worldviews, i.e., underlying values, tacit beliefs, and mental models (cognitive schemes) that are used to understand and make sense of the world (see, e.g., Fitzpatrick, 2023; Garthwaite et al., 2023). For example, Fitzpatrick (2023) examined the role of diverse worldviews (typically characterized as traditional, modern, post-modern, technocentric, or ecological) in the conceptualization and interpretation of sustainability, and how views and behaviors could potentially be transformed towards a liveable future for humans and the environment. Similarly, Garthwaite et al. (2023) found that four concrete ‘cultural types’ (culture-dependent worldviews about human–nature interaction) determine how people think (environmental) risks are perceived and should be managed. Historians and philosophers of science have argued that scientists too, especially in the phase of the initial formulation of principles, hypotheses, and theories, are often guided by personal convictions about the structure of the world (symmetry/chaos, continuity/discontinuity, etc.) and by specific worldviews in the tradition of their scientific field/community (Holton, 1981; Kuhn, 1970, 1996), e.g., the mechanistic-deterministic worldview of classical mechanics or the stochastic worldview of quantum mechanics.

6 Connections to and Implications for Science Education

6.1 Connections to and Implications for SSI- and Argumentation-Based Teaching, and Scientific Literacy

Argumentation-oriented (see, e.g., Erduran & Jiménez-Aleixandre, 2007), SSI-based (see, e.g., Zeidler et al., 2019), and NOS-oriented (see, e.g., Flick & Lederman, 2006; Irzik & Nola, 2011) courses offer a favorable environment for teaching risk topics like those mentioned above and elsewhere in this article, while conversely teaching risk issues reinforces and actualizes the aims and content of those approaches.

Socio-scientific issues are considered the most appropriate terrain for the incorporation of risk content, because most of them are risk-related and controversial (Bencze et al., 2020; Birdsall, 2022; Christensen, 2009; Hansen & Hammann, 2017; Schenk et al., 2021; Sjöström et al., 2017), thus providing ample opportunity and context for teaching and learning about risk and about the disagreements resulting from scientific and axiological uncertainties, and for learning and comparing risk management strategies. Research on SSI-based teaching focused initially on its potential for supporting students’ learning of scientific content and of the nature of science, and later on its potential for enhancing students’ argumentation, decision-making, and perspective-taking abilities, which are needed especially in the conditions of complexity, controversy, and ethical dilemma characteristic of SSIs (see, e.g., Zeidler & Sadler, 2008; Zeidler et al., 2019; Bencze et al., 2020; Kahn & Zeidler, 2019). Such abilities are obviously related to those needed for the assessment of risks and for reaching decisions concerning them.

A core feature of SSI-based instruction is the conception of teaching and learning science in its broader social, cultural, and ethical context. This is also the perspective of science, technology, and society (STS) education and of other similarly oriented current approaches and projects in science education (see Bencze et al., 2020; Valladares, 2022; Birdsall, 2022; Laherto et al., 2023). These approaches integrate, more or less explicitly, the issue of risk in their scope, thus providing arguments and context for teaching risk topics such as those elaborated in the present article. Equally, since the article focuses on contemporary societal risks and on the collective decisions and behaviors required for tackling them, it strengthens the argument for a social orientation in science education, and it reinforces and actualizes the aims and content of those approaches. For example, knowledge about risk helps with the elaboration and analysis of the risk dimension of SSIs, which is usually understated and left unanalyzed when they are taught (see Genel & Topcu, 2016; Schenk et al., 2021).

The topics treated in this article are also connected to and reinforce and enrich the field of research into student argumentation and reasoning. Having students discuss the nature of uncertainty and how to deal with it strengthens and expands their reasoning and decision-making abilities in relation to conditions characteristic of the severe global risks of our age.

Argumentation-based teaching, like SSI-based teaching, is based on students’ discussions and deliberations, thus giving them an opportunity to practice the kinds of discourse discussed in Section 4.3; the arguments (their structure and components) needed for conducting these kinds of discourse have been described and documented in the science education argumentation research and should reflect scientific reasoning (see, e.g., Erduran et al., 2004; Erduran & Jiménez-Aleixandre, 2007; Giere, 2001; Develaki, 2017, 2019). In this research, however, there is, with few exceptions (see, e.g., Develaki, 2017, 2022), insufficient focus on argumentation and reasoning in conditions of uncertainty and ignorance (Christensen, 2009; Schenk et al., 2019); what is needed, in other words, is a more comprehensive approach to the subject of risk, including topics such as the nature of today’s technological innovations (their unpredictability and ambivalence), the role of scientific and axiological uncertainties and of values in the assessment and management of risk, criteria for judging the plausibility of alleged risks and responsive measures, and appropriate epistemologies and strategies for dealing with such risks, with emphasis on the precaution- and discourse-based approaches. Argumentation and SSI research should focus more on risk-related argumentation and decision-making in the context of the SSIs associated with contemporary technological innovations and the risks of their application. This article elaborates such topics and thus offers important background for equipping students with the knowledge and the reasoning and discursive practices that they will need for participating in and contributing to the handling of risk-related decision problems.

As we have said, democratic societies need citizens capable of making informed assessments and decisions about controversial and risk-related SSIs, which means, among other things, citizens who are scientifically literate, since scientific knowledge has an important place in these processes. Risk competence is thus an essential aspect of scientific literacy (Hansen & Hammann, 2017). Scientific literacy is the term used to cover the several goals of science education (in very rough outline, the provision of scientific and NOS knowledge and the development of skills). There are various approaches to the content and ultimate goals of scientific literacy, and risk understanding and management has a place in all of them: in those that emphasize the learning of scientific knowledge and the methods of its generation, because scientists face risk in their choice and acceptance of hypotheses and postulations; in those that focus on science learning in relation to its relevance to everyday life and to the social dimension and impacts of science, because the risks and uncertainties associated with scientific-technological applications have an impact on society; and in those that support the inclusion of a socio-political dimension and action-taking in scientific literacy, because dealing with the crucial risk problems of the age implies transformations of worldviews and behaviors. (These conceptions of the ultimate goal of scientific literacy are classified as Version I and Version II (see Roberts, 2007), and Version III, an expanded Version II; see Sjöström et al., 2017; Birdsall, 2022.)Footnote 9

6.2 Connections to and Implications for Conceptions of NOS

Understanding and handling uncertainty and risk are also of direct relevance to and have implications for NOS conceptions and NOS instruction in science education.

Scientific data are a very important guide for deciding under uncertainty at the level of the individual and of society, but not the sole one. As we said in Section 4.1, socio-political, cultural, and moral aspects and values also play a critical role, not only because the data relating to the risk problem may be insufficient or uncertain but also because taking into account the concerns and diverse views of the people affected is necessary for effective and just risk management. Scientific uncertainty and limitations may be due to the complexity of the phenomena implicated in the decision problem, or to the fact that the related research is still in its early stages, but also to value-laden factors inherent in the processes of producing and evaluating scientific knowledge, factors connected, for example, with the abstractions and assumptions of experimental methodology, with modeling, with statistical controls, and more generally with inductive inferences and generalizations from a limited number of empirical cases (see Develaki, 2020, 2022). An understanding of the nature and causes of the possible uncertainties of scientific knowledge, of the resulting controversies, and of the role of values in decision-making is fundamental to understanding and handling decision problems in risk-related and controversial issues. Consequently, teaching and understanding the issue of risk demonstrates the necessity for NOS views informed by awareness of the possible uncertainties and limitations of scientific knowledge. This stands in opposition to the epistemological view of the certainty of scientific knowledge that frequently prevails in school science teaching. That view fails to explain to students that the scientific knowledge taught in the classroom and presented in their schoolbooks is idealized, that it is valid with certainty in the idealized, abstract conditions necessary for scientific modeling but that these conditions do not correspond to all real-world conditions (see, e.g., Develaki, 2020), and that what is taught is well-established and sufficient for classroom purposes but not for real-world problems and phenomena, for which the necessary research may still be in its infancy and the available data may be imperfectly substantiated or doubtful.

It is clear that the subject of scientific uncertainty is connected to the question of the reliability of scientific knowledge and of trustworthiness in science, a topic that needs to be handled carefully in classroom teaching and that we have discussed in another article (Develaki, 2022). Students’ awareness of the existence of uncertainties and limitations is important for mature attitudes and informed reasoning and decision-making in conditions of uncertainty, but it should not lead to an undermining of the value and trustworthiness of science, which are justified by many facts and arguments. In that article (Develaki, 2022) we proposed a conditional, informed trust in science, analyzing on the one hand the arguments that challenge the objectivity and reliability of scientific knowledge, and on the other the reasons that ensure them. These reasons include the sophisticated research methods and the mechanisms for identifying and correcting errors that science has developed (e.g., peer review and the reproducibility test), the readiness of scientists to revise their positions in the face of new evidence, and the social character of scientific practice (see Longino, 1990), which, provided that scientific communities work appropriately, enables critical and transformative interactions that can filter out views and findings affected by subjective preferences, interests, and social values or ideologies. Valladares (2022) also notes the need to cultivate trust in science, especially in this post-truth era, which is characterized by skepticism about or even denial of scientific knowledge, fostered and promoted by various means and mechanisms of a political, economic, and psychological nature (see, e.g., Höttecke & Allchin, 2020). She argues that, besides providing students with scientific knowledge, epistemologies, and skills, a social epistemology that draws on the full range of studies of science and technology (e.g., historical, philosophical, sociological, political, economic, and environmental) is required for dealing effectively with the post-truth problem and for ‘unmasking for whom it is important that we decry science’ (p. 1311) (see also Oreskes, 2019).

6.3 Some Suggestions for Teaching Risk Topics

The topics and analyses presented in this article are best suited to the risk education of science teachers and of upper high school and college students, although certain subjects may be adapted for teaching to lower-level target groups, so that students may begin learning early about basic risk issues and especially about ways of dealing with them.

(This article’s analyses are intended first of all to contribute to the research towards the development of a theoretical foundation of the subject of risk in science education. Some elements of the background used for these analyses, for example concerning the more specific concepts and terms of Risk Analysis and Decision Theory, are therefore intended for researchers and educators rather than students.)

Uncertainty is an uneasy condition for students, as it is for all of us (Giere, 2001). Science education has to help them understand and cope with uncertainty and risk, equipping them with knowledge and means, from the basic to the more sophisticated and reflective, as they progress from lower to higher levels of education (see also Hansen & Hammann, 2017; Schenk et al., 2021).

For example, at the lower secondary level, students can simply come to realize, on the basis of related texts/publications and public debates, that the scientific data relating to the phenomena and to the consequences of the activities involved in a socioscientific issue they examine may be insufficient or uncertain, resulting in disagreements about the acceptability of risks and the ways of coping with them, and that there are also axiological differences that further intensify the disagreements and hamper decision-making. They should also be informed about, and discuss, the basic characteristics and goals of the precaution- and discourse-based risk management strategies developed for dealing with such disagreements (such as taking anticipatory action even if the occurrence and severity of the risks is not scientifically certain; searching for and choosing alternatives while taking into account the risks and the ethical dimension of those alternatives; and adopting discursive and participatory practices). Students can experience such uncertainties and differences, and the resulting difficulties in solving risk problems, if they are asked to express their opinions on the estimation of the severity of concrete risks and on their management, and are then encouraged to proceed to consensus-seeking democratic discourse, the aim being to understand that, despite its difficulties, such discourse can be the most efficient way to reach a consensual resolution of risk-related problems and controversies and to achieve balanced and just decisions.

At the upper secondary and college levels, students have learned more scientific knowledge and more of the practices used for its generation (modeling, testing and validation methods, scientific reasoning) and are possibly oriented to, or have already entered, the field of scientific activity, where the development of a scientifically responsible and socially sensitive risk competence is very important.

Risk assessment and management at these levels can and should be treated in a more theoretical, specialized, and reflective way, for which the analyses of risk topics given in the previous sections of this article can be helpful. Such a treatment requires knowledge about the nature and causes of scientific and axiological uncertainties and the resulting disagreements as regards the estimation and management of risks, about the sociocultural factors that, besides scientific data, influence people’s risk perceptions, and about the strategies and practices developed for handling such disagreements and for decision-making in conditions of uncertainty and ignorance. Students should acquire knowledge about risk estimation methods, decision rules, and the general reasoning of the basic risk management strategies (the probabilistic/CBA strategy and the precaution- and discourse-based strategies) and become familiar with practices for dealing with risk problems on a social level, such as discourse and participation practices.

More important still is the comparison of the risk management strategies: which kinds of risk each of them can effectively and meaningfully tackle, and to what degree each considers ethical issues and promotes discursive and participatory processes conducive to responsible and just decisions and responses. This comparison should reveal the limitations of the standardly used probabilistic models and the importance and efficiency of the precaution-based and discourse-based strategies, especially for coping with the severe uncertainties and ignorance attached to contemporary and future global risks. Students at these levels should also be aware of and reflect on the critical approaches (discussed in Section 5.2) regarding the valuation and use of technological innovations and of ‘technological progress’ more generally, and relate these issues to the stances and ways of life of contemporary societies.

(These topics and their interconnections with and implications for science education approaches and goals (with the SSI-, argumentation-, and NOS-based approaches) are elaborated in Sections 4, 5, and 6, and are summarized in Section 7.)

Teaching the risk topics discussed in this paper is favored by educational approaches like those mentioned above and by teaching methods and models based on learning through argumentative discourse, participatory experience, and reflection, such as problem-based teaching, interdisciplinary projects/units, and role playing, which are based mainly on team teaching. However, aspects of risk can be incorporated into any teaching approach and course, linked to course content connected with current risk-related decision problems and public debates, while a more complete treatment of and reflection on risk can be achieved through a project designed specifically for this purpose.

Moreover, recent didactic proposals that favor risk instruction include the recommendation for SSI teaching in informal and place-based contexts. The thinking here is that students’ experiencing of a specific SSI in its real-world context, that is, with the participation of the stakeholders affected by the SSI, can help them recognize, appreciate, and reflect on the plurality of perspectives and arguments of those involved, and possibly move ‘to more sophisticated ways of conceptualizing and resolving SSI’ (Zeidler et al., 2019, p. 6). Another is the proposal for reflective and transformative learning, particularly pertinent to tackling questions like climate change, biodiversity, and other such issues in the context of sustainability, because it encourages critical reflection on one’s own and others’ views, perspectives, concerns, and aims, which might lead to transformations of views and to solutions to a problem (see Sjöström et al., 2017; Fitzpatrick, 2023).

One crucial factor and precondition for addressing risk topics and issues in the classroom is of course the underlying knowledge base of the teachers, which means that they need to be equipped to handle the scientific, social, and ethical aspects involved in the assessment of and decision-making on risk, and supplied with appropriate teaching material. This article contains topics and information that could be incorporated into teaching units on SSIs, on students’ argumentation, and on environmental education.

7 Summary, Conclusions, and Implications

Assessments and decisions on the complex, crucial, and global risks of the present age take place in a climate of uncertainty and axiological conflict. Science education needs to help students tackle the challenge of decision-making in these conditions. As noted in the introduction, the science education literature has long argued for the importance of introducing the issue of risk and developing risk competence in science education. In this article we examined the three basic components of the risk issue — the definition, assessment, and management of risk — with the aim of extending and filling gaps in the science education literature and practice, especially as regards risk management, a topic insufficiently covered to date.

In Section 2 we gave the standard quantitative definition used in academic risk research, which reflects the realist approach, according to which risk exists independently of the human observer and technical risk estimates by experts constitute an objective representation. This conception was expanded with the findings of the constructivist approach, which focuses on the subjective aspect of the layperson’s risk perception. Both approaches are important for a fuller understanding of risk and consequently for more efficient risk management, and both should be included in related teaching. In order to understand risk-related decision problems and participate constructively in debates about them, students need to be aware that both scientific data and subjective factors play a role in risk perception, and that laypeople, in assessing the consequences of potential risk-taking, may see aspects that are absent from the experts’ estimations, but their perceptions may also be driven by biases that can lead to mis-estimations and mistaken responses.

In Section 3 we presented the basic concepts and rules for decision-making under uncertainty of classical Decision Theory and the Cost–Benefit Analysis (CBA) model developed in this framework. Depending on the degree of knowledge of the possible outcomes and the probability of their occurrence, classical Decision Theory classifies decisions as ‘decisions under risk’, ‘decisions under uncertainty’, and ‘decisions under ignorance’. It provides specific decision rules for the first two cases, but no specific rules or guidance for the case of ignorance (when both the probabilities of the outcomes and all or some of the outcomes themselves are unknown). Students should know that the probabilistic decision models standardly used in CBA risk management can operate smoothly in cases where the outcomes and their probabilities are known but do not provide sufficient guidance in the case of ignorance, which is precisely the case with the consequences of complex global problems like climate change, the application of biotechnologies, and worldwide socio-economic imbalances.
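For readers who want a concrete handle on this classification, the sketch below (in Python, with wholly assumed payoffs and probabilities) contrasts expected-utility maximization, applicable to decisions under risk, with the maximin rule often proposed for decisions under uncertainty; it illustrates the textbook rules, not any particular author’s method.

```python
# A minimal sketch (assumed numbers) of two classical decision rules:
# expected-utility maximization for 'decisions under risk' and maximin
# for 'decisions under uncertainty' (known outcomes, unknown probabilities).

# Rows: options; columns: utilities in possible states of the world.
options = {
    "act_A": [10, -5, 3],
    "act_B": [4, 2, 3],
}
probs = [0.5, 0.3, 0.2]  # known only in the 'decision under risk' case

def expected_utility(payoffs, probs):
    return sum(p * u for p, u in zip(probs, payoffs))

# Decision under risk: maximize expected utility -> act_A (4.1 vs. 3.2).
best_risk = max(options, key=lambda o: expected_utility(options[o], probs))

# Decision under uncertainty: maximin, i.e., choose the option whose
# worst outcome is least bad -> act_B (worst 2 vs. worst -5).
best_uncertainty = max(options, key=lambda o: min(options[o]))

print(best_risk, best_uncertainty)
# Under ignorance, where some outcomes themselves are unknown, neither
# rule can even be written down -- the gap the text points to.
```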

In Section 4 we described and compared the structure, possibilities, limitations, aims, and applicability of the three basic risk management strategies commonly employed (namely the risk-based strategy built on the probabilistic approach of conventional decision theory, and the precaution-based and discourse-based strategies), emphasizing the need in every case to take the societal consequences into account. We discussed first (in Section 4.1) the criticisms of the risk-based strategy relating to conceptual problems (e.g., its tendency to ignore unquantifiable risks and to focus on current costs rather than on future benefits) and to ethical problems (e.g., the monetization of health damage or environmental destruction, and the aggregation of costs and benefits without considering how and to whom they are distributed). In Section 4.2 we described the thinking, aims, possibilities, and conditions of application of the precaution-based approach to the handling of uncertainty. The underlying idea here is that when actions may have unknown harmful or catastrophic consequences, anticipatory action should be taken in order to protect health and the environment, even if the occurrence (and cause) of those consequences is not scientifically certain. Among the basic elements that characterize the precaution-based approach and that should be discussed in science education are that, in comparison with risk-based strategies, it takes uncertainty seriously into account, that it embraces ethical considerations, and that it urges discursive and participatory practices to cope with high uncertainty and ignorance.

A significant change that the precaution-based approach thus brings to risk management is that its effective use presupposes discourse involving scientists/experts as well as the other parties affected by the consequences of the risk assessment and management, which raises the question of the nature of such discourse and especially of social participation. In this regard we discussed (in Section 4.3) the accounts of Klinke and Renn (2002) and of Clahsen et al. (2019), who suggest that for risk-related decision problems the management strategies and the kind of discourse and participation should depend on the nature of the risk and of the risk-related knowledge: complexity, uncertainty, and ambiguity. These authors suggest that complexity requires an epistemic discourse among scientists and experts, because they are best equipped to interpret complex phenomena and their effects. Uncertainty needs a reflective discourse, involving stakeholders and other interested parties besides scientists, aimed at achieving compromises and consensual decisions, since the risks and the required responsive measures cannot be assessed solely on the basis of the scientific data. In the case of ambiguity, the resulting disagreements are socio-political and ideological, and hence more resistant, and resolving them requires a participatory discourse (involving also the general public) focused on resolving ambiguities and value differences. The authors suggest that, together, these kinds of discourse constitute an analytic-deliberative process for handling risk evaluation and management, and in Section 4.3 we explained why we regard this as a proper foundation for the teaching of assessment and decision-making under uncertainty in science education: in view of the scientific uncertainty attached to the complex global risks of our age and the diversity of the related disagreements in our pluralistic societies, dealing with these issues requires the input of citizens trained in the appropriate skills, among them the ability to engage in informed, constructive discourse and collective decision-making processes. The analytic-deliberative approach supports exactly these abilities. Discourse and participation practices are acquiring increasing importance in the handling of uncertainty and in resolving the resulting controversies on a social level, and science education should and can contribute in this direction by familiarizing students with these practices.

In Section 5 we developed two basic topics for coping with uncertainty in decision problems. The first has to do with the informed estimation of risks/threats, so as to avoid both the overestimation and the underestimation of risks and, consequently, of the corresponding responsive measures. In this context we discussed some features of risky activities and examined proposed criteria for distinguishing between plausible and implausible risks. The educational implication is that students should know that credible risks can be distinguished on the basis of specific criteria (and should be trained to do so), so as to avoid both the overestimation of risk, which leads to unwarranted fears and disproportionate costs, and especially the underestimation of risk, which can result in harmful or catastrophic and irreversible situations. The second topic concerns the nature and dangers of technological innovations and the question of which views and positions lead to their abuse or their wise use, a topic that goes to the root of the matter and its relationship to the lifestyles of contemporary societies. The approaches we discussed are critical of the paradigm in which ‘technological progress’ is a given value that outweighs the negative side effects of technologies and justifies the development and commercialization of any technological system, and they propose replacing it with informed perceptions and stances. Given, on the one hand, the important place of technology and engineering in contemporary societies and, on the other, the uncertainty attending the potential negative effects of their applications, students’ reflection on and adoption of thoughtful, informed worldviews and of cautious, forward-looking decision-making and practices is of vital importance. Science education can and must play a role in this direction, in the context of its goals of preparing critical and heedful future citizens and responsible future researchers/scientists.

In Section 6, we discussed interconnections and implications of the topics treated in the article with educational approaches, teaching methods, and NOS conceptions in science education. SSI-, argumentation-, and NOS-based teaching provide context and arguments for teaching risk, while the topics and analyses presented here reinforce the goals and frameworks of those approaches. Risk is a typical aspect of contemporary SSIs and the related arguments and debates, while risk-related argumentation is a critical ability for decision-making in contemporary risk societies and should also be an aspect of all argumentation-oriented teaching that wants to be in step with societal problems that involve science and technology. On the other hand, NOS conceptions in science education should promote an informed trust in science that recognizes scientific uncertainty and should in parallel provide students with criteria to recognize and deal with uncertain scientific knowledge and the resulting disagreements. In sum, the above educational approaches are directly connected with issues of risk and risk competence and place them in the more general context of the discussion about the social dimension of science and science education, while risk analyses such as those presented here reinforce them by providing conceptual background for their risk-related contents and goals.

In earlier articles we dealt with decision-making under scientific uncertainty on the individual level: how a person can decide whether to trust scientific claims/models/results relating to serious issues in their personal and social life when the supporting evidence is inconclusive or insufficient (Develaki 2020, 2022). In this article we examine decision-making under (scientific) uncertainty on a more collective level, looking at the strategies followed in risk assessment and management in practice. Since this case involves both epistemic disagreements and socio-political differences (value conflicts), we have discussed criteria and strategies for coping with these. Comparison of risk management strategies is a topic that should be discussed in the classroom, to prepare students to assess the value of the precautionary approach and of inclusive participatory discourse for dealing with the serious uncertainty of contemporary social and environmental issues.

Risk competence constitutes an element of scientific literacy. The topics and analyses in this article may provide some useful background for science education researchers, teacher educators, and science teachers who either deal with the subject of uncertainty and risk or would like to integrate it into their work, so as to increase the social significance of science education and bring it into closer alignment with contemporary social issues and concerns.