Introduction

Over the last few decades, we have witnessed an explosion of risk management practices across a wide range of organizational contexts, such as environment, health, food, crime, media, and traffic (Lupton 1999; Tulloch 1999; Hutter and Power 2005; Hughes et al. 2006; Kemshall 2006; Taylor-Gooby and Zinn 2006; Renn 2008; Reith 2009). All organizations have to relate to risks in their environment. These risks are not only connected with industrial activities, such as harmful substances or technical artifacts. A growing number of risks instead concern how actors act upon what they see as risks associated with an organization. Public relations, risk communication, and participatory approaches to risk management have emerged as means to handle diverging interests in society, not least because public perceptions can themselves be a source of risk, in the sense that they may pose a threat to the legitimacy and stability of existing ways of managing risk (Power 2007, p. 21). Thus, risk management focuses on how organizations deal not only with the technical calculation of risks, but also with the actors they perceive as possible threats and potential risks to the stability of the organization. Managing such processes is a matter not only of rules for how we should mitigate or accept certain environmental hazards or health risks, but also of rules regarding the process itself, and of activities that target the understanding of risk and deal with public opinion and perceptions concerning it.

This development implies that risk management is no longer limited to a specific sector dealing with certain kinds of risks (such as nuclear power, the chemical industry, and road transport). Risk management has instead become an integral part of managerial language and organizational activities, and all organizations—private companies, governmental agencies, interest organizations, and nongovernmental organizations—have to deal with risks and have made risk management an important rationale for their activities. Not only organizations, but also citizens must include risk thinking when organizing their social world (Tulloch and Lupton 2003; Höijer et al. 2006). Previous certainties—social forms such as the nation state, class, ethnicity, traditional family structures, and gender roles—which people used to map out their futures are now eroding, and citizens have to navigate their lives without them (Beck 2002, p. 22). There is public concern not only about new technologies, but also about how to organize social life and about who and what to trust in an uncertain world. This has led researchers to claim that we today face a grand narrative of risk and risk management at the global level (Power 2007, p. viii). Thus, society has no option but to organize itself in the face of risk (Lidskog et al. 2005). Assessing, managing, and communicating risk has become a veritable industry.

This progression from risks associated with certain industrial activities to risks associated with individual and organizational behavior has led to a strong call for sociological analysis. However, scientific development is not only a reflection of changing societal conditions; it is also a driving force of this change. Social theorists have claimed that we today live in a risk society (Beck 1992), in a culture of fear (Furedi 2002), and in a social climate that fosters insecurity, fear, and risk (Giddens 1990; Bauman 2006; Furedi 2008). Citizens and organizations have been provided with a new risk language, leading them to evaluate different phenomena and activities in terms of risk. Thus, there is a dynamic relation between societal development and our understanding of and reflection on this development and on society at large.

Within science we have witnessed a development from technical risk analysis—populated by philosophers, statisticians, and economists—to the broader field of risk governance, in which social scientists ponder how actors understand risks as well as how they handle them. Risks are put in specific contexts, which implies a call for social science in general and sociology in particular to develop knowledge on risk. This is why we today see an explosion of social scientific literature on risk, in particular on how to analyze and manage it.

The main contribution of sociology to the field of risk research is the insight that society is differentiated, which means that cognitions, understandings, and feelings of risk are differentiated as well. Actors have different cultural belongings and structural positions, which make them understand reality differently and therefore also act differently. To develop sociological knowledge on risk thus means to contextualize risks: they are not the result of a calculation made beyond society, but the result of how actors, located in specific social settings, understand and manage certain phenomena. For sociology, risk is always a particular risk situated in a specific context.

Risk is a relatively new object of sociological research, and even though this field has grown rapidly over the last three decades, it has not yet been fully institutionalized as a self-evident subfield of sociology (Krimsky and Golding 1992; Zinn 2008, p. 200). In addition, the sociological discipline covers a broad range of perspectives and traditions, which is reflected in its research on risks; there are a number of sociological ways to conceptualize, understand, and conduct research on risk. Sociological thought spans from rational choice approaches to cultural theory; it encompasses micro-sociological theories on the construction of self-identities as well as macro-sociological theories on world systems. Thus, it is not an easy task to map out the sociology of risk. We will do so, however, by starting from some central assumptions of sociology, reviewing some of its most well-known approaches to risk, and finally presenting a few important ongoing discussions and pointing out important areas in need of further research.

The aim of this chapter is to develop a sociology of risk that does not take technical risk analysis as its point of departure, in order to prevent sociology from being given an overly restricted role in risk research. Instead, we argue that the task of sociology is to contribute to an understanding of the risk field in which risks are always situated in a social context and are necessarily connected to actors’ activities. Thus, sociology opposes any kind of reification of risks, in which risks are lifted out of their social context and dealt with as something uninfluenced by the activities, technologies, and instruments that serve to map them.

The essay comprises four sections, this introduction being the first. The second section presents a historical perspective on sociology and risk. It starts by briefly describing what sociology is, followed by how the concept of risk gradually became an object of sociological thought. The section ends by presenting and discussing three different sociological perspectives on risk, each providing a different contribution to the risk research field. The third section focuses on current strands of sociological risk research. It starts by giving an overview of three sociological approaches to risk: Mary Douglas’s cultural theory of social order, Ulrich Beck’s theory of reflexive modernization and the risk society, and Niklas Luhmann’s system theory. All three approaches conceptualize risk differently and make different contributions to the sociological study of risk. The review of these theories is followed by a presentation of five central, partly overlapping, and ongoing discussions within the sociology of risk: risk governance, public trust, democracy and risk, the realism–constructivism debate, and governmentality and risk. Finally, the fourth section, building on these current discussions, briefly presents some areas in need of further sociological research.

History

What is Sociology?

The origin of sociological thought can be traced to the end of the eighteenth century in Western Europe (Eriksson 1993). At this time, questions arose about the social order, the division of labor, social hierarchies, social cohesion, and individualization. A concept of society emerged according to which society was not simply the sum of its population, but a social phenomenon sui generis, a phenomenon with its own characteristics.

In this understanding, society was not external to human beings but rather something constitutive of them. The history of human beings and the history of society are two sides of the same coin. This perspective on society emerged with thinkers such as Adam Ferguson (1767), John Millar (1771), and Adam Smith (1776). Whereas Thomas Hobbes (1651) understood man as a “rational wolf”, in need of external pressure (in the form of norms, laws, and force) to make social life possible, these social thinkers understood human beings and society as interdependent. In their view, society was more than an external environment; it was also something inside us. Our words and deeds were not only individual acts; they were also social products. These thinkers were the predecessors of sociology, and 100 years later the discipline of sociology first saw the light of day.

The growth of sociology is intertwined with the development of empirical data collection and social statistics (Calhoun et al. 2007, pp. 13–18). In a number of European countries, governments had begun to regularly collect information about their populations. Statistical analysis grew strongly, and the national census—originally a way to keep track of adult males’ availability for military service—emerged as a regular activity of the state, taking its modern form in the nineteenth century. British Parliamentary investigations of industrial conditions provided the empirical basis for Karl Marx’s initial theorizing, empirical data on deaths collected by governments and churches for Émile Durkheim’s study of suicide, and publicly gathered data for Max Weber’s investigation of German peasants and “junker capitalism”.

Sociology was constituted as an empirically based social science, with an emphasis on the importance of context (social, material, economic, cultural) for understanding social life. It also emphasized that society is a social phenomenon in its own right, not just an aggregate of individuals. Social practices and collective understandings could not be explained by reference to individual human beings alone.

Phenomena and activities should be understood and explained in relation to their social contexts. And this applies not only to norms and artifacts, but also to actors and knowledge. Hence, sociology is critical of individualistic explanations, though without rejecting the importance of actors. Individual human beings are always situated in social settings, which have to be considered when explaining their cognitions, feelings, and practices.

Risk in Sociology: From Social Problems to Risk

The relationship between individuals and society has been central to sociological thought since its origin (Giddens 1984). There have been—and still are—many ways for sociology to explore and explain this relationship. Even if sociologists agreed that there is no such thing as a “pure individual”—a human being unaffected by society, whose thoughts, emotions, and will develop apart from it—they also emphasized that this relationship is not a harmonious one. People find themselves limited by social positions and cultural belongings and struggle to transform structural barriers and cultural restrictions. At the same time, sociologists emphasized that human beings’ aspirations and ideals come not only from inside, but also from outside—from the social norms and ideals that surround them and which they have gradually internalized. People do not develop their goals, values, and preferences apart from those that exist in society, but in relation to them. Thus, the human being is neither a puppet nor her own master. Social structures and cultural belongings not only serve as barriers to social action; they also enable it.

In classical sociology, social problems, not risks, were the focal point. Its classical thinkers—Karl Marx, Max Weber, Émile Durkheim, and Georg Simmel—were preoccupied with the emergence of modern society, not least the development of industrialization, urbanization, and rationalization and their degrading effects on human beings. Different kinds of social problems were brought to the fore in sociological analysis, and different angles were tried in exploring them. Risk was not incorporated as a conceptual lens through which these problems were understood and analyzed. Instead, risk research emerged and developed without any relation to sociological thought. The reasons for this were twofold: disciplines dealing with risks did not see any relevance in sociological analysis, and sociology did not see risk as a relevant object of research.

Traditional risk concepts have been developed within a framework where risk is technically defined. For technical risk analysis, risk means anticipating potential harm to human beings, cultural artifacts, and ecosystems, averaging these events over time and space, and using relative frequencies (observed or modeled) to specify probabilities (Renn 1998, p. 53). Thus, risk concerns a situation or event in which something that human beings value is at stake and where the outcome is uncertain (Jaeger et al. 2001, p. 17).
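Expressed formally, this technical understanding reduces risk to an expected value over possible adverse events. A minimal formalization of this idea (our own illustration of the standard expected-value definition, not an equation given in the cited sources) is

R = \sum_{i} p_i \, c_i ,

where p_i is the relative frequency (observed or modeled) of adverse event i and c_i is the magnitude of the harm that event would cause; averaging over time and space corresponds to estimating the p_i from pooled event records or models.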

This kind of analysis implies that one set of experts establishes the probability and magnitude of the hazards and another set of experts evaluates the costs and benefits of various options. Thereafter, political priorities are invoked in order to make decisions on regulating (forbidding, controlling, permitting) certain risks (Amendola 2001). In this view, science is pivotal in measuring and assessing risks, and experts should therefore guide the risk management process.

Technical risk analysis thus embodies an unsociological understanding of risk; it does not consider the broader social, cultural, and historical context from which risk as a concept derives its meaning (Lupton 1999, p. 1). In response to this kind of analysis, three different sociological perspectives have emerged, each making a different contribution to the field of risk research: the social explanation of public misperceptions of risk, the social amplification of risk, and the social construction of risk. With this development, important ideas from classical sociology have gradually been drawn upon and brought to bear on risk research.

Sociology Explaining Public Misperceptions of Risk

Risk researchers and risk managers gradually recognized that the public’s perception of risk was different from the view held by the experts. The nuclear researcher Chauncey Starr’s seminal article “Social benefit versus technological risk”, published in Science in 1969, emphasized the importance of considering public acceptability of risk (Starr 1969). He found that risk tolerance was correlated with a number of social components. For example, the public more easily accepted voluntary and familiar risks than involuntary risks (comparing risks associated with the same level of social benefit).

Social and behavioral scientists have devoted themselves to finding out how different groups and individuals perceive risks (Gutteling and Wiegman 1996; Breakwell 2007). Their point of departure—most explicitly within the psychometric school of risk analysis—is that for laypersons risk is a subjective assessment in which contextual factors play an important role. This does not mean that citizens’ reasoning is irrational or haphazard. On the contrary, it is possible to find cognitive patterns and trace causal factors that explain citizens’ risk perceptions. Factors such as novelty (how new a risk is), dread (how feared it is), and whether the risk is seen as tampering with nature have been found to significantly shape risk perception (Slovic 1987; Sjöberg 2000). Perceived influence and power are also important factors, as is cultural belonging (Finucane et al. 2000; Zinn and Taylor-Gooby 2006). Also important is the social and spatial context in which people make judgments (Lidskog 1996; Wester-Herber 2004). Much research has also been concerned with how different social groups access, interpret, understand, and respond to different forms of information in diverse contexts (Slovic and Peters 1998; Bickerstaff and Walker 2001; Howel et al. 2002).

Thus, a number of contextual and social factors explain why the public does not assess risk in the same way as experts. This perspective considers citizens’ perception and understanding of risk, but pays no attention to the perceptions of experts and how they understand and measure risks. Instead, the technically defined risk is taken for granted; it is seen as an objective phenomenon, about which scientific measurement and statistical calculation give correct, or at least the most valid, knowledge.

When technically defined risks are not seen as contextually generated, the public’s risk understanding is portrayed as biased or incorrect compared with the experts’ more accurate assessment. The difference between expert and citizen understanding is interpreted as caused by public ignorance or misunderstanding of science (Irwin and Wynne 1996; Levinson and Thomas 1997). The assumption is that by informing—and sometimes even educating—laypeople about the “real risk”, the public will correct its judgment and accept risks that experts and regulators have found acceptable (Gouldson et al. 2007). According to this view—commonly labeled the deficit model of public understanding of science (Irwin and Wynne 1996)—knowledge is first produced in a closed circle of scientists and should then be disseminated to the public (who in many cases are unable to understand science properly). This view is a variant of the “sociology of error”, which explains what is seen as error and falsehood in science with reference to contextual factors (such as traditions, ideology, conventions, authority, and interests), and what is seen as true and valid knowledge with reference to observations and reasons. In this view, the role of sociology is to explain error, not truth (Bloor 1976).

An implication of this perspective is that risk assessment is an objective analysis aiming to produce factual knowledge about specific risks, while risk communication is about distributing or transmitting this factual knowledge to the public. To make risk communication effective, it is important to understand how different segments of the public understand risks and assess sources of information, in order to inform them effectively about risks in different circumstances. The sociological task is to provide knowledge about the factors that result in public misperception of risks.

Sociology Explaining Amplifications of Risk

The deficit model draws a sharp line between risk as defined and assessed by experts and risk as understood by the public. However, many sociologists seek a more sophisticated way to understand the clash between experts’ and the public’s understandings of risk. In the late 1980s, the social amplification of risk approach was developed (Kasperson et al. 1988; Pidgeon et al. 2003). This is a communication model in which an original physical event generates a signal that passes through different social stations that amplify or attenuate it. The model explains why risks that technical risk analysis evaluates as similar may receive different levels of attention in society at large.

The basic assumption is that a risk event has certain physical characteristics, such as material damage, injuries, and deaths. These characteristics provide an original signal that is then transformed in the communication process. The risk event itself has no meaning, but the social stations of amplification charge it with meanings and messages. The social amplification of risk explains how risks and risk events interact with psychological, social, institutional, and cultural processes in ways that amplify or attenuate risk perceptions and public concern, and thereby shape risk behavior.

It also explains the development of secondary effects of risk events, that is, risks caused by the amplification processes and not by the signal itself (Kasperson et al. 1988). Social amplification of a risk event associates it with meaning that may result in changed policy regulation, new conditions for insurance, consumer boycotts of a product, decreased institutional confidence, and social stigma. Thus, the amplification may result in social or economic consequences that go far beyond the direct consequences of the risk event.
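The mechanics of the model can be rendered schematically. The sketch below is our own toy illustration of the amplification idea (the station names and numerical factors are invented for illustration and are not part of the framework): an initial risk signal is multiplied by station-specific factors, amplified where the factor exceeds one and attenuated where it falls below one, and a strongly amplified signal triggers secondary effects that go beyond the original event.

```python
# Toy sketch of the social amplification of risk idea (illustration only).
# Station names and factors are hypothetical, not part of the framework.

def amplify(initial_signal: float, stations: dict) -> float:
    """Pass an initial risk signal through social stations that
    amplify (factor > 1) or attenuate (factor < 1) it."""
    signal = initial_signal
    for station, factor in stations.items():
        signal *= factor
        print(f"after {station}: signal = {signal:.2f}")
    return signal

# A minor incident (weak physical signal) amplified by mass media and
# activist groups, and slightly attenuated by a regulatory agency.
stations = {"mass media": 3.0, "activist groups": 2.0, "regulatory agency": 0.8}
final_signal = amplify(1.0, stations)

# Secondary effects follow from the amplified signal,
# not from the original event itself.
if final_signal > 2.0:
    print("secondary effects: stigma, consumer boycott, tighter regulation")
```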

In this approach, technical risk analysis cannot provide information about how risks are amplified in society; understanding and explaining processes of amplification is solely a task for the social and behavioral sciences. The social amplification approach thus proposes a division of labor in which technical risk analysis investigates the original signal, whereas social science in general and sociology in particular analyze how this signal is transformed by society.

The social amplification of risk approach is symmetric in the sense that both the attenuation and intensification of the original risk signal are taken into account. Both technical risk analysis and sociological analysis are needed in order to gain knowledge on risk and its consequences for society. The approach contributes an understanding of why certain hazards and events that experts assess as low risk may receive public attention, whereas other hazards that experts consider more severe receive less attention (Kasperson et al. 2003). By encompassing different factors on different levels, it presents a dynamic and multilayered view on how risk understanding develops. It also aims to link the three leading schools of risk analysis—technical risk analysis, psychometric studies of risk perceptions, and sociocultural studies on risk understandings—into a single framework (Pidgeon et al. 2003).

This bridging effort is ultimately based upon a clear division between a physical world of events and a social world of meanings. The amplification process starts with a physical signal, either in the form of an event (e.g., an earthquake) or the recognition of an adverse effect (such as the discovery of climate change). Thereafter, social factors attribute meaning to it. Risk is thereby conceptualized partly as an objective property of a hazard or risk event, and partly as a social construct (Kasperson 1992, p. 158). According to its proponents, this position avoids the two problems of conceptualizing risk in a totally objectivistic or in a relativistic manner (Renn 2008, p. 39).

The approach links different ways of understanding and analyzing risk. It is, however, a synthesis based on a linear model in which something external to society is channeled through amplifying stations, resulting in different consequences, and in which feedback mechanisms and processes of iteration take place only between the social stations of amplification. The hazard itself and experts’ calculation of risk (not least through technical risk analysis) are left outside the approach and are not included in the analysis. In this way, risks—or at least risk events and hazards—are positioned as external to society. Fact finding and sense making are seen as separate and discrete spheres of activity, the former populated by technical risk analysts and the latter by various segments of the public. The model does not discuss how the risk (in the form of risk events, hazards, or the technical calculation of risk) is constructed, but only how it is amplified. Hence, the bridging ambition also reproduces the divide between expert and public understandings of risk: not only risks are amplified in this approach, but also the divide between risk and understandings of risk.

Sociology Explaining Risk

In contrast to sociological studies of the misperception of risk and the amplification of risks, the social construction of risk approach includes the role of science and technical risk analysis as topics to investigate. Its starting point is that all risks are socially constructed in the sense that risks always exist in contexts (Wynne 1992a). This means that technical risk analysis and experts’ assessments of risks have no privileged position; they are only one of many possible ways to frame, define, and understand risks. Thus, knowledge is intimately related to meaning and actors, which means that no kind of knowledge and no kind of actors should be excluded from sociological analysis. Instead, a symmetrical approach is put forward, where all risks—irrespective of how they are assessed and by whom—are seen as socially constructed. Risks, hazards, and risk events are all sociocultural phenomena in their own right, and should not be seen as unproblematic facts that generate specific signals which laypeople then misunderstand or social stations amplify.

This understanding—that science’s assessment of risk should also be seen as a construction of risk—has been fuelled by recent developments in society. Science was initially applied to a “given” world of nature, people, and society, and scientific skepticism demystified the social and natural worlds. Science’s own claim to rationality was itself spared the application of scientific skepticism. According to Ulrich Beck (1992), a process of “reflexive scientization” is gradually taking place, whereby scientific skepticism is extended to the inherent foundations and external consequences of science itself. This demystification opens up new possibilities for questioning science and technical risk analysis. The extension of the scope of rational skepticism means that no scientific statement is “true” in the old sense of an unquestionable, eternal truth, where “to know” means to be certain (cf. also Giddens 1990, p. 40, 1994).

The implications of this perspective—risk as a product of social processes—are far-reaching. Not only does it mean that the task of sociology is to analyze how actors—including science and risk experts—frame, define, understand, and manage risks. It also implies that the separation of risk regulation into distinct areas—risk assessment, risk management, risk communication—is incorrect. Values are not invoked solely in the initial process of defining risks that are then analyzed, evaluated, and regulated by technical risk analysis. Instead, they are an intrinsic part of the entire risk regulation process, including the development and validation of knowledge. Thus, even if these areas are presented as separate spheres, they are not discrete activities ordered in a linear process of regulating risk, but are dynamically related to each other. The problem is that technical risk analysis’ definition of risk is preceded by an implicit framing, which is rarely discussed, either by citizens or by the risk researchers themselves. This framing provides a very restricted understanding of risks and of actors, behaviors, and processes (Wynne 1992a, 2005). When this framing is naturalized—taken as a pre-given way to understand and conceptualize risk—the role of sociology is restricted to investigating and explaining why actors’ perceptions and understandings of risk differ from those put forward by technical risk analysts.

The social construction of risk approach has been criticized, not only by technical risk analysts but also by social scientists. To consider risk a social construct implies, according to its opponents, a far-reaching relativism in which risk bears no relation to any reality beyond human consciousness and cultural values. The social amplification of risk approach was explicitly developed with the aim of transcending the division between naïve empiricism and far-reaching relativism, as an approach that includes both the need for technical risk analysis and the need for cultural theory (Kasperson et al. 1988). Some researchers, such as Ortwin Renn (2008), argue that it is possible to take advantage of both a more contextual understanding of risk and a more traditional, analytical, and context-free approach to it. This is done by defining risk as constituted by both physical/material and social/cultural elements (Kasperson 1992, p. 158; Renn 2008, p. 2). This argument rests on the general assumption that values and evidence, social norms and factual knowledge, deliberation and analysis can be analytically separated, but that in practice there is a need for better integration of these analytically distinct entities.

As will be shown below, to understand and analyze risk as a social construct does not necessarily imply a strong relativism; it rests on the assertion that risks are social facts irreducible to technical measures. Similarly, its view of knowledge—as contextual, unstable, and sociocultural—does not imply a relativism in which all standpoints are given the same cognitive value.

Current Research

A number of sociologists, from somewhat differing standpoints, have emphasized that risk has largely replaced the earlier notions of fortune and fate (Beck 1992; Bauman 1993; Luhmann 1993; Giddens 1999). In the past, a lack of certainty was attributed to powers (God, nature, magic) beyond human control, whereas today it is attributed to organizations (such as scientific communities, companies, and nation states). Risk is a factor in human decision making because we cannot gain sufficient knowledge about which possible future will result from our decisions. Furthermore, risk is constituted by the distinction between present reality and future possibilities. It thus presupposes that the future is not determined and that human action shapes it. As Anthony Giddens (1990, p. 3) puts it:

Modernity is a risk culture… The concept becomes fundamental to the way both lay actors and technical specialists organise the social world. Under conditions of modernity, the future is continually drawn into the present by means of the reflexive organisation of knowledge environments.

However, to reflect on future consequences of human action is nothing new in history. The decisive difference is that in modern societies almost all aspects of social life are included in these reflections and have become objects of decisions and deliberations; hence thinking in terms of risk assessments is a more or less ubiquitous exercise in everyday life (Callon et al. 2009).

In the following, we will present three important sociological contributions to risk research, all of which treat risk as central for society at large. They take into account the broader historical, social, and cultural contexts from within which risk derives its meaning and resonance. Thereafter, we present five thematic areas that are objects of lively discussion in contemporary risk sociology.

Important Theories

Mary Douglas: Purity and Danger

The British anthropologist Mary Douglas (1921–2007), who has inspired many social scientists in the field of risk research, argues in her book Purity and Danger that risks should be understood with reference to social organization (Douglas 1966). Assessments of risks are responses to problems in the social organization of a specific society, but they are also resources for building social order and defending social boundaries. The way risks are viewed reflects the organization of society, including its boundaries with other societies. What we usually understand as threats coming from outside society are in fact problems within it.

More specifically, Douglas is interested in how societies assess purity and pollution, which she connects with the overarching concept pair of order and disorder. Purity supports order (both cognitive and social) and pollution is what deviates from and threatens order, and should therefore be condemned. According to Douglas, the separation between purity and pollution, the latter signifying danger, is one of the most fundamental conceptual distinctions in our thinking. This division is, however, relative: “There is no such thing as absolute dirt: it exists in the eye of the beholder” (Douglas 1966, p. 2).

It is important to stress that these definitions are not chosen individually, but in a collective process which is compelling for individuals. Demands for purity are simultaneously requirements for social order, for the survival of society. Douglas argues that we all have norms of order and an associated type of purity to defend, but that these orders and types vary between groups and societies.

In every defense against risks, there is a wish to protect a social order that is considered endangered. Discussions about risks invoke a desirable norm—a norm of purity—against which the seriousness of the risks can be established. This norm makes it possible to demand risk reduction and thereby increased purity. Demands for better risk management imply demands for societal change. A norm of purity contains a vision of a societal order that better corresponds to this norm. Therefore, for sociology, risk should never be seen as something out there, separate from society, but as something produced in and by society.

One implication of this perspective is that our use of the concept of risk reveals who we are. Values and beliefs (including preferences and knowledge)—what Douglas calls cosmologies—are viewed as coherent and endogenously derived systems (cultural biases), generated from specific patterns of social relations (Douglas 1978). Such cosmologies support and legitimate social relations. Actions, organizations, and knowledge interact to generate and legitimate social relations. This interplay should not be understood as a unidirectional causal relation but as a reciprocal interaction in which knowledge and society are co-produced (cf. Jasanoff 2004).

One well-known tool for interpreting and explaining such co-production is the grid–group typology originally developed by Mary Douglas and her coworkers (Douglas 1982, 1996; cf. Thompson et al. 1990). “Grid” describes the internal structure, how roles and activities are positioned, and “group” the external boundary, how the line between insiders and outsiders is drawn. These two dimensions are fundamental to all cultures, and together they imply four different cultures: hierarchical, individualistic, egalitarian, and fatalistic.

The hierarchical culture is characterized by a stable and regulated internal social order (high grid). Group membership is strong (high group); it is clear to everyone who is a member and who is not. This culture is characterized by formal procedures, rules, routines, timetables, and trust in authorities. The individualistic culture is the opposite of the hierarchical. Group membership as well as hierarchy are low (low grid and low group). This is a culture of enterprise characterized by uncertainty and change. Decision making is performed with a minimum of formal procedures, and is based on trust in individual competence. The egalitarian culture has a strong boundary to other groups and the outside world (high group). Purity is strived for and outsiders are viewed as threats. The internal differentiation is low (low grid); equality is strived for between positions. The fatalistic culture is a residual culture. It is neither individualistic nor collectivistic, but rather cut off (low group). It includes individuals who do what they are told, though without the protection of either social privileges or individual skills (high grid). Those who govern activities and formulate plans for what will happen are always someone else.

Douglas has used this typology to explain the existence and distribution of different risk perceptions (Douglas and Wildavsky 1982; Douglas 1992). The way individuals and groups react to risks reveals their cultural belongings. Is the reaction about embracing, ignoring, rejecting, or adapting (Douglas 1978)? In an uncertain situation of risk, are the possibilities emphasized or the negative consequences? These questions are relevant for all kinds of issues that people see as risks and dangers; they can concern things like nuclear power and biotechnology, but also EU membership and immigration.

In terms of the grid–group typology, individualists tend to focus on the possibilities, embracing risks as opportunities to exploit for personal profit. Fatalists do not know how to react and tend to ignore the risks. Egalitarians mobilize resistance in order to reject and eliminate the risks. Hierarchists try to assimilate and adapt to the risks through regulation and control of risk activities.
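The two dimensions and the four typical risk responses described above can be summarized compactly as a lookup table. The sketch below is our own didactic rendering, not a formalization proposed by Douglas; the labels simply restate the text.

```python
# Grid-group typology as a lookup table (didactic rendering only).
# Keys are (grid, group) levels; values are the culture and its typical
# response to risk, as summarized in the surrounding text.

CULTURES = {
    ("high", "high"): ("hierarchical", "assimilate and adapt through regulation and control"),
    ("low", "low"): ("individualistic", "embrace risks as opportunities to exploit"),
    ("low", "high"): ("egalitarian", "mobilize resistance to reject and eliminate risks"),
    ("high", "low"): ("fatalistic", "unsure how to react; tend to ignore risks"),
}

def classify(grid: str, group: str) -> tuple:
    """Return the culture and its typical risk response for a grid/group combination."""
    return CULTURES[(grid, group)]

print(classify("low", "high"))   # egalitarian: reject and eliminate risks
print(classify("high", "high"))  # hierarchical: regulate and control
```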

Ulrich Beck: The Risk Society and Reflexive Modernization

Ulrich Beck’s (1992) Risk Society: Towards a New Modernity—originally published in German in 1986—is one of the most influential works of social analysis in recent decades. The book is about the reflexive modernization of industrial society. Beck’s underlying thesis is that we are witnessing not the end but the beginning of modernity, a modernity beyond its classical industrial design. This guiding idea is developed from two angles. First, Beck focuses on the social transformation from an industrial society, with its production of wealth, to a risk society, with its production of risks and social hazards. The other side comes into view when Beck places the immanent contradictions between modernity and postmodernity within industrial society at the center of the discussion. Thus, risk and reflexive modernization are the two—intrinsically interrelated—themes of the book. Beck explicitly states that his aim is “to understand and conceptualize in sociologically inspired and informed thought these insecurities of the contemporary spirit, which it would be both ideologically cynical to deny and dangerous to yield to uncritically” (Beck 1992, p. 10).

Just as modernization in the nineteenth century dissolved the structure of feudal society and produced industrial society, modernization today is dissolving industrial society, and another society is coming into being. This new society is “the risk society”, a distinct social formation just as industrial society was. The risk society differs clearly from the industrial class society in that it focuses on the environmental question and the distribution of risks rather than the social question and the distribution of wealth. In both types of society, risks are socialized, that is, perceived as products of political decisions and human action; but in contrast to the risk society, the classical industrial society saw risks as manageable side effects of the production of wealth. These risks were legitimated partly with reference to the production of wealth and partly through society’s development of precautions and compensation systems.

Beck (1992, p. 21) defines risk as “a systematic way of dealing with hazards and insecurities induced and introduced by modernization itself”. The risks and hazards of the risk society differ from those of industrial society in being more widespread and serious. For the first time in history, society harbors the political potential for global catastrophes. In the risk society, the relation between wealth production and risk production is reversed: the production of wealth is now overshadowed by the production of risks. The risks produced have lost their delimitation in time and space and consequently can no longer be seen as “latent side effects” afflicting limited localities or groups.

At one level, the distribution of risks adheres to the class pattern, but it does so inversely: wealth accumulates at the top, while risks accumulate at the bottom. The risk society could therefore be seen as simply reinforcing the class society. At another level, however, this is not true. Today’s diffusion and globalization of risks entails “an end of the other”: private escape routes shrink (it is impossible to buy yourself free from risks), as do the possibilities for compensation. Thus, risk positions are no longer a pure reflection of class positions; instead, they transform and replace them. One example is that property (such as forests) is today being devalued; it is undergoing a creeping “ecological expropriation”, which implies the emergence of new conflicts between the interests of profit and property. This means that the central conflicts of the future will be not between East and West, between communism and capitalism, but between countries, regions, and groups involved in primary and in reflexive modernization, the latter being those striving to relativize and reform the project of modernity.

Global risks—mega-hazards, to use Beck’s term—overlap with social, biographical and cultural risks, as well as insecurities. Today, these latter forms of risk have reshaped the inner social structure of industrial society and its fundamental certainties of life: social classes, familial forms, gender status, marriage, parenthood, and occupations. This comprises the other part of Beck’s discussion of reflexive modernization.

Beck formulates the theory of reflexive modernization as the unleashed process of modernization overrunning and overcoming its own “coordinate system”. This coordinate system has fixed the understanding of the separation of nature and society, the understanding of science and technology, and the cultural reality of social class. It features a stable mapping of the axes between which the life of its people is suspended—family and occupation. And it assumes a certain distribution and separation of democratically legitimated politics, on the one hand, and the “sub-politics” of business, science, and technology, on the other.

Today, a social transformation is underway within modernity, in the course of which people are being set free from the social forms of industrial society. This reflexive modernization dissolves the traditional parameters of industrial society (such as class and gender). This “detraditionalization” occurs in a social surge of individualization, through which a capitalism with individualized social inequality is developing. Here the family is replaced by the individual as the reproductive unit of the social in the lifeworld.

Having discussed this theory of individualization, Beck turns to the role of science and politics in the era of reflexive modernization. He argues that when encountering the conditions of a highly developed democracy and well-established scientization, reflexive modernization leads to an unbinding of science and politics. Earlier monopolies of knowledge and political action are then differentiated.

In discussing science, Beck makes a distinction between primary and reflexive scientization, the former meaning that science is applied to a “given” world of nature, people, and society, and the latter that the scope of scientific skepticism is extended to encompass the inherent foundations and external consequences of science itself. Reflexive scientization thus entails a demystification and demonopolization of scientific knowledge claims. At the same time, the role of science in the risk society is growing. Today’s threats are beyond human perception and experience, and it is through science that risks become known. Science thus constitutes the “sensory organs” for perceiving today’s risks. Taken together, these two parallel developments do not mean that science has come to an end; on the contrary, it pervades all areas of modern life. Rather, science is being dethroned in a way similar to what happened to (institutionalized) religion. In Beck’s secularization model of modern science, the future will bring a pluralization and a marketization of science.

Beck has been of pivotal importance for sociology, not least by paving the way for making risk a central concern of general sociology. In the wake of Risk Society, he has published a number of books in which he further develops his perspective: Ecological Politics in an Age of Risk (1995), Ecological Enlightenment (1995), and World Risk Society (1999). In his later writings—such as What Is Globalization? (2000), Individualization (2002, with Elisabeth Beck-Gernsheim), Cosmopolitan Vision (2006), Power in the Global Age (2006), The Brave New World of Work (2010), and A God of One’s Own (2010)—Beck puts more emphasis on reflexive modernization and its importance for all aspects of society, such as family, work, religion, and global politics. However, in all his books, irrespective of their subject, the main theme is reflexive modernization and the future development of society.

Niklas Luhmann: System Theory and Risk

Drawing on Talcott Parsons’s social theory, the German sociologist Niklas Luhmann (1927–1998) developed a general theory of modern society. Its starting point is that there is a fundamental distinction between system and environment, and that communication is the basic social operation (Luhmann 1984, p. 47). The higher the complexity of the environment, the more important it is for the system to reduce this complexity; otherwise the system cannot operate.

This reduction of complexity is accomplished through functional differentiation, which means that different subsystems develop, each with its own distinct forms of communication (programs and binary codes). These subsystems are self-referential and autopoietic, which means that their internal orders guide their observations and interpretations and that they are not formed or structured by any external factors.

A subsystem is cognitively open; it is receptive to signals from its environment. At the same time, it is operationally closed: signals are always transformed into communication through the subsystem’s particular binary code. These codes are abstract and universally applicable distinctions. Science codes a signal in terms of truth/untruth; the economy in terms of property/no property; law in terms of legal/illegal; religion in terms of transcendent/immanent; politics in terms of power/lack of power; and so forth. Luhmann (1989, p. 18) states that:

The system introduces its own distinctions and, with their help, grasps the states and events that appear to it as information. Information is thus a purely system-internal quality. There is no transference of information from the environment into the system. The environment remains what it is.

This does not mean that nothing exists other than social systems and their communicative processes. It means that external facts can be taken into account only as part of the system’s environment and can be understood only through communication. There is no position available outside the system; these facts can only be understood from within (Luhmann 1993, p. 5). It is, however, possible to observe the border of a system, and this is done through “second-order observation” (Luhmann 1993, p. 223). First-order observations identify facts and objects as givens and do not reflect on the distinction used in the observation. Second-order observation is an observation of the distinction implicitly used in the first-order observation; it recognizes which distinction is applied in observing a fact or object. Luhmann (1993, p. 227) stresses that second-order observation is not more true or objective than first-order observation. What second-order observation reveals is that there are no objective facts outside the operation of each subsystem. What may seem like an objective fact in a first-order observation (which takes its own distinction for granted) is a product of a particular distinction made in the observation process. Thus, the same event is coded differently by different subsystems.

For instance, the signal from the Tohoku earthquake in Japan on March 11, 2011, which resulted in more than 15,000 deaths and a nuclear disaster at the Fukushima nuclear power plant, is coded radically differently by different subsystems. The economic subsystem focuses on price mechanisms, economic compensation, and falling share prices; the political subsystem on the legitimacy of decisions concerning the location of nuclear power plants and of how the disaster was handled by the authorities, but also on the legitimacy of the political representatives who had permitted this activity; the legal subsystem on violations of the permits granted to the plant and on the liability of the company as well as of political institutions; science on the health consequences of radiation exposure for workers and the local population. It is what is communicated that counts. A phenomenon, an event, or an activity can never in itself create a response; it needs to be the subject of communication.
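The logic of subsystem-specific coding can be sketched as a simple mapping. The example below is our own illustration (the question strings attached to the event are invented for the purpose of the example); it shows how one and the same event is rendered as different communications depending on a subsystem’s binary code.

```python
# Illustrative sketch of Luhmann's idea that each subsystem observes an
# event only through its own binary code. The codes follow the text; the
# example framings of the event are invented for illustration.

BINARY_CODES = {
    "science": ("truth", "untruth"),
    "economy": ("property", "no property"),
    "law": ("legal", "illegal"),
    "religion": ("transcendent", "immanent"),
    "politics": ("power", "lack of power"),
}

def code_event(subsystem: str, framing: str) -> str:
    """Render an event as communication in a subsystem's own binary code."""
    positive, negative = BINARY_CODES[subsystem]
    return f"{subsystem} ({positive}/{negative}): {framing}"

event_framings = {
    "science": "what are the health consequences of radiation exposure?",
    "politics": "is the decision to permit the plant still legitimate?",
    "law": "were the permits violated, and who is liable?",
    "economy": "how do compensation claims and share prices develop?",
}

for subsystem, framing in event_framings.items():
    print(code_event(subsystem, framing))
```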

But as physical, chemical, or biological facts, they create no social resonance as long as they are not the subject of communication. Fish or humans may die because swimming in the seas and rivers has become unhealthy. The oil pumps may run dry and the average climatic temperature may rise or fall. As long as this is not the subject of communication, it has no social effect. Society is an environmentally sensitive (open) but operatively closed system. Its sole mode of observation is communication. It is limited to communicating meaningfully and regulating this communication through communication (Luhmann 1989, pp. 28–29).

Risk is inherently linked to a functionally differentiated society. In contrast to many other theories, Luhmann does not see risk as a result of detrimental activities or as caused by industrial society. Instead, risk is attributed to decision making that may result in negative consequences. Contingency is a central concept for Luhmann: a situation includes a large number of possibilities, and to be able to act it is necessary to choose among them, yet there is no fundamental point—no external authority—that tells us what to select among the various alternatives. This has to do with the development of the functionally differentiated society.

In contrast to earlier societies, there is no privileged function system in society, which means that a functionally differentiated society has no center. Each subsystem can only refer to its own communication, and only internally can it refer to its environment. Through internal differentiation, the system develops a richer way to manage the complexity of the environment. At the same time, this differentiation results in a higher internal complexity, with different functional subsystems existing side by side and communicating with their own specific codes and with no external authority. Earlier societies’ external references (such as religion) are replaced by the social system’s self-references in the form of subsystems.

Luhmann defines risk as the attribution of an undesired event or possible future loss. Risk is thus an intrinsic part of a functionally differentiated society. Decisions have to be made without any certainty about what consequences they will lead to. The cause of the damage can be attributed either to the system itself (risk) or to something external to the system (danger) (Luhmann 1993, pp. 101–102). Risk is thus a matter of attribution, which becomes even clearer when Luhmann discusses another distinction, namely, that between those who take a decision and those who are exposed to its consequences (Luhmann 1993, p. 105). Those who take the decision face a risk, whereas those who are affected by it face a danger; that is, they perceive themselves as exposed to something they cannot control. Uncertainty is intrinsic to both risk and danger; the difference lies in who is seen to be the decision maker.

Luhmann (1993, p. 109) stresses that “one man’s risk is another man’s danger” and claims that there is a growing gap between those who participate in decision making and those who are excluded from decision-making processes but have to bear the consequences of these decisions.

Luhmann’s system theory provides an alternative understanding not only of risk but also of society at large. It sees risk as a matter of attribution and communication, associates it with decision making, views it as inherently linked to a functionally differentiated society, and strongly emphasizes the distinctions between risk and danger and between decision makers and those affected. In doing so, the theory has both received support and met with criticism (Japp and Kusche 2008, pp. 101–103).

Thematic Areas

There is today an ongoing and lively discussion within the sociology of risk. In what follows, we present five partly overlapping areas of central importance in contemporary risk sociology: risk governance, public trust, democracy and risk, the realism–constructivism debate, and governmentality and risk.

Organizational Risk: From Risk Analysis to Risk Governance

To conceptualize an object as a risk entails seeing it as manageable and governable (Baldwin and Cave 1999; Hood et al. 2001; Hutter 2001; Lidskog et al. 2009). Risk creates space for action as it opens the future for calculation, deliberation, and decision making. In this sense, regulation “enrolls” futures and shapes policy formulations (Wynne 1996).

Risk regulation is not only about governing an existing reality; it also concerns the transformation of this reality, for instance by dealing with novel forms of knowledge that have not yet been put into industrial practice (cf. Stehr 2005). Regulation thus concerns not only how to regulate an existing activity, but also how novel knowledge should be deployed and employed.

As shown above, there has been far-reaching criticism of technical definitions of risk, not least concerning the difficulty of upholding a sharp separation between an objective measure of risk and a sociocultural understanding of risk (Hilgartner 1992; Rosa 1998; Amendola 2001; Todt 2003). The critique was initially directed at problems within risk analysis, and consequently public perception of risk was seen as a challenge to how risk was defined and approached by technical and calculative means. In the 1990s, however, there was a shift from this internal focus to the broader question of the legitimacy of government. Organizations must ponder not only how to deal with technically defined risks, but also how to deal with actors who may question both the legitimacy of current methods for regulating risk and the trustworthiness of the organizations responsible for this regulation. Prompted by several regulatory failures, authorities and companies have started to account for and deal with public opinion and public perceptions of risk, not only to handle criticism but also to forestall it (Löfstedt 2005). Programs for risk communication, public relations, stakeholder dialogue, and public involvement are today integral to both public and corporate governance (Gouldson et al. 2007). Calls for more inclusive and transparent processes, public dialogue, and democratic engagement are widespread in society (Irwin 2006). A heightened concern for stakeholder involvement and public inclusion can be seen as a strategy to influence perception, shape understandings, and produce legitimacy.

Michael Power describes this shift as a move from risk analysis to risk governance. The “governing gaze” has shifted from how risk is defined, analyzed, and calculated to the governance of the organizations that analyze risk (Power 2007, p. 19). Even though the call for a more inclusive risk analysis and risk management may have evoked some response from regulatory agencies, it has not directly led to more inclusive and deliberative risk regulation processes. Rather, the shift from risk analysis to risk governance has increased the awareness of how organizations deal with public opinion and public perceptions as a source of risk in the sense that such perceptions could pose a threat to the legitimacy and stability of existing ways of governing risk (Power 2007, p. 21). This justifies research on how organizations deal not only with technically defined risks but also with the actors they perceive as possible threats and potential risks to the stability of the organization.

Risk regulation does not only concern what is acceptable in terms of how we should mitigate or accept certain risks and hazards, but also rules regarding the process itself and activities that target the understanding of risk and deal with public opinion and perceptions concerning it. Thus, risk governance is not limited to technical calculation of risk, but also includes the evaluation of organizational aspects in regulation of risk.

With a focus on risk governance, that is, on how uncertainties are organized so as to transform them into governable risks, the questions of who should be included in or excluded from risk regulation processes, on what grounds, and which aspects should be made open and transparent to others gain greater relevance, given the legitimacy gains and losses such decisions may generate. A heightened concern for public involvement in regulation can thus be seen as "a strategy to govern unruly perceptions and to maintain the production of legitimacy in the face of these perceptions" (Power 2007, p. 21).

Public Trust: The Relation Between Experts and Laypeople

Technical risk analysis builds on a sharp boundary between experts and laypeople. Laypeople do not have access to all the knowledge possessed by experts and therefore draw different conclusions about risks, their ordinariness, magnitudes, and impact. In technical risk analysis, scientific knowledge is the norm, and this is what experts have but laypeople lack. This deficit is what motivates the concept of lay knowledge: the knowledge of those who are not experts. Focusing on what laypeople lack constitutes the basis for the deficit model mentioned earlier in this essay (Irwin and Wynne 1996). According to this model, the solution is to inform and educate laypeople in order to give them the capacity to gain correct knowledge and thereby arrive at the same conclusions as experts. Risk psychology and risk communication originally developed as academic fields with the aim of understanding how laypeople reason about and assess risks, in order to learn how to communicate correct knowledge to this group effectively.

The deficit model has been heavily criticized, not least by researchers in the field of science and technology studies (STS) (Wynne 1995; Irwin and Wynne 1996). These scholars argue that the most important problem is not that the public is unaware of research results and scientific facts, but that scientific experts are unaware of and uninterested in lay knowledge and in how laypeople assess the situation when decisions are to be taken on complicated risk issues. Consequently, the problem is not that laypeople lack knowledge or lack trust in expertise, but that experts in technical risk analysis do not trust laypeople. Laypeople have the competence to contribute to discussions and decisions on risks, since these concern much more than scientific facts. If grasping scientific details becomes the most important requirement for participation in risk discussions, the relevance of scientific knowledge becomes heavily exaggerated (Irwin and Michael 2003, pp. 22–28). Despite their lack of scientific knowledge, laypeople are competent actors with well-developed abilities to reflect on what types and sources of knowledge are relevant to risk analysis and risk assessment, and on why some experts should be trusted more than others.

Today, much attention is devoted to public involvement, but there is a tendency to frame this involvement within a technocratic understanding based on the deficit model (Irwin 2006; Lidskog 2008). In this way, broadened participation gives experts further opportunities to inform the public with the aim of winning acceptance for already proposed decisions. This instrumental ambition can be found in every participatory project, because there are always groups who strive for a specific outcome of the process. Studies have found that when laypeople are not considered competent to influence decisions, when they are taught instead of listened to, the result is often alienation rather than engagement among the public (Wynne 2001).

Problems arise when such an overconfident and self-sufficient expert culture tries to communicate the benefits of a risk project. Such a culture is not interested in reflecting on its own shortcomings, and criticism from laypeople is understood as stemming from their not understanding what is best for them (Wynne 2001, p. 447). If expert cultures wish to increase their legitimacy and appear trustworthy to the public, they should instead be less confident about their own results and more open to acknowledging their own limitations.

It is, however, not only uncertainty that should be given attention, but also ignorance. Scientific knowledge is highly specialized and narrowly focused, which implies that complexities are reduced and alternatives are actively excluded. In order to increase the trustworthiness of scientific knowledge, it is important to make visible the conditions under which scientific knowledge and risk assessments are produced. The implication is that scientific knowledge alone is not enough when deciding about complicated risk issues. It must therefore be enriched by other types of knowledge, as well as by other perspectives, in order to give a more complete and nuanced view of the risks at stake.

The alternative to the deficit model and a technocratic framing of risks is a broader understanding of participatory processes, one which acknowledges that actors other than scientific ones can contribute knowledge on risk and should therefore be given opportunities to influence the regulation of risks (Funtowicz and Ravetz 1990; Lidskog and Sundqvist 2011). Hence, questions concerning who formulates the issue, sets the agenda, and exercises power become important. However, this does not mean replacing blind trust in experts with blind trust in laypeople. Drawing sharp boundaries between experts and laypeople and granting one of them priority over the other is not the right way to proceed. It means, rather, that the public always has important contributions to make in technical discussions on risk issues. These contributions should never be evaluated in terms of the deficit model, because they are not about scientific facts but about how to assess the relevance, trustworthiness, and ignorance of scientific facts. This kind of public competence, which emphasizes the contextual dimension of science, can always enrich scientific knowledge (Wynne 1993, p. 328).

The public can also contribute knowledge and insights about what they are worried about (Marres 2007; Sundqvist and Elam 2010; Lidskog 2011). This knowledge, as well as the assessments made by members of the public, is anchored in their livelihoods. Experts and authorities are often completely unaware of the reasons for ordinary people's worries, and it is therefore of great importance to involve concerned groups in decision processes in order to include relevant experiences. Experts and decision makers need to become more aware that worries are the driving force of public engagement and that scientific knowledge is rarely an adequate response to these worries.

Risk and Democracy: The Importance of Framing

How risks are defined is a central topic for sociology to study, because it determines which groups and which competences are considered relevant for taking a stand and making decisions (Lidskog et al. 2011). Sociology contributes to risk analysis by showing how definitions of risk shape social relations and distribute power to some groups while excluding others from decision making. In a risk context, a scientific definition of the issue at stake is often assumed. Such a restriction, however, may lead to reductionism, giving experts too much power while rendering other important factors invisible. Laypeople are reduced to passive receivers of information who can contribute only their trust and consent regarding expert proposals (Wynne 1992b, 2001).

Sociological studies of expert work do not conclude that risk issues should be handled without experts. What is claimed is that experts alone should not define, investigate, and give answers to risk issues; risks are too complex to be delegated to experts alone. Sociological studies of the social dynamics involved in defining risks are valuable for making risk management more relevant and robust, not least by showing how definitions and framings are made and what consequences these processes have for different groups' possibilities to participate in and influence risk management.

Regulating a risk is not only about setting limits, but also about framing the risk as such: deciding which actors are of relevance and should be included in the decision process, what roles, mandates, and responsibilities are given to them, and, finally and most importantly, what the decisions are to be about (Lidskog et al. 2009). Sociologists and other social scientists have critically scrutinized framing processes in public decision making, and a key finding is that frames concerning technical issues are usually dominated and influenced by experts in a technocratic way (Wynne 2001, 2005). By narrowing the issue, scientific experts exaggerate the scope, power, and importance of scientific knowledge in the public domain, neglecting cultural factors and ignoring citizen competence (Wynne 1992b, 2001; Jasanoff 2010). Paradoxically, science is often accorded the most prominent role even when public dialogue is the stated aim. A reason for this is that many issues are technically framed, with questions of risk, safety, and effectiveness placed at the center (Wynne 2005, 2010). Scientific expertise is needed to answer such questions, since experts are taken to have the resources and competence to know the "true" nature of the issue at stake.

The result is that what is presented as a democratic risk decision process is often a technocratically framed process based on a scientific definition of the risk. The public is invited to participate in a process that is presented as being about dialogue, but its possibility to influence the process according to its own perspectives is often unclear and, in practice, very restricted. Such processes do not open up decision making to wider evaluation and influence; instead, they function to gain legitimacy and acceptance for already defined, and often in practice already decided, expert-based proposals.

A technocratic framing reduces the role of citizens to trusting or distrusting experts, to saying yes or no to already decided proposals, and to discussing only the local and concrete aspects of a project. Instead, citizens should be provided with opportunities to define and frame the project in their own way, putting forward the risks they see as relevant and worthy of attention. Sociologists have argued that the discussion should include the meaning of the project and its risks not only from the perspective of the experts and regulators, but also from the perspective of the public and other stakeholders (Gieryn 1999; Hilgartner 2000; Irwin and Michael 2003; Jasanoff 2005; Wynne 2005).

Since there is no single correct framing of risk issues, risk management will always be surrounded by conflict. Different groups frame problems differently and give them different priorities; what one group considers a solution to a problem, another group may consider part of the problem. Some suggest that nuclear power is a sustainable energy solution because it does not lead to carbon emissions, while others argue that the radioactive waste makes it anything but a sustainable and secure long-term source of energy. In this situation, denying the existence of different frames is a dead end. Instead, the first step toward a robust solution is to acknowledge the existing frames and to welcome different groups to contribute their own perspectives, using their own frames, knowledge, and values. Frequently, this opening up of the framing is met with criticism and opposition from groups that are already handling the issue from a particular frame; they have invested time, money, and prestige in the project and are therefore reluctant to change the established framing of the issue and its particular way of handling it. But taking public involvement seriously entails a more democratic framing of issues, in the sense that issues have to be connected to public concerns (Marres 2007). If we are to move beyond pendulum swings between technocracy and populism, between the positions that either scientists or laypeople should decide, the various groups and their frames must meet on a more equal footing.

Produced Risk: Beyond Realism and Constructivism

An ongoing controversy in risk research is that between realism and social constructivism. Do risks possess physical characteristics that exist independently of cultural and social contexts, including actors' perceptions, or are they socially and culturally constructed attributes, produced and shaped by these contexts? Technical risk analysis is based on realism and sees risks as independent of their context. As described earlier in this essay, many social scientists and sociologists accept technical risk analysis and its realism, but add studies on why the public accepts some risk analyses and rejects others. The public's risk assessments are understood as social constructs, whereas experts' risk assessments are seen as realistic descriptions. Among social scientists, however, we also find those who question the realist approach and consider it wrong, wanting instead to go beyond this dichotomy and find a middle ground between realism and social constructivism. Many scholars claim that there is a third way between "naïve realism" and "idealism" (Renn 2008), between positivistic and constructivist paradigms (Rosa 1998), and between "pure realism" and "radical constructivism" (Zinn 2008). The former pole implies the existence of an empirical and objective reality outside human perception, the latter a subjective and cultural understanding shaped by humans, with no necessary connection to an objective reality.

However, the quest to find a middle way, or third way, between subjective and objective reality—between something internal and something external to human beings and society—reproduces the very dichotomy it tries to transcend: the one pole rests on the causal laws of material reality, the other on a social world of opinions and norms. This point of departure is questioned by certain sociologists, who claim that the focus should instead be on the dynamic interplay between the different factors that make up reality (Irwin and Michael 2003; Latour 1993, 2004, 2005). Reality is reducible neither to something "out there", beyond human action, nor to something "in here", human thoughts and actions. Instead it is co-produced by many factors.

However, social constructivism has been and still is an important tradition within the sociology of risk. Its historical roots go back to classical sociology, not least the work of Émile Durkheim (1858–1917). Durkheim elaborated a unique domain for sociology by demarcating a social reality totally different from that of biology and psychology. "The social"—or social facts, as he called it—is a reality in its own right, irreducible to other levels of reality (Durkheim 1982). The task of sociology was to explain social facts, and these explanations should not include any findings or factors from the psychological (individual) level or the biological level. The result was a specialization and division of labor among academic disciplines, where every discipline has its own domain and unique explanations. This understanding of sociology entailed that everything that exists outside the social domain was disregarded. The social domain was considered autonomous with regard to other domains.

The implication was that when analyzing risk, sociology should study people's interpretations and experiences of risks, and how these are bound to social structures that steer perceptions and actions. Material objects and technical artifacts were left outside the analysis, seen as having no power to influence what takes place in the social domain. Experiences of risk should not be explained with reference to nature or artifacts, but only with reference to social factors. For example, when sociology explains people's worries about nuclear waste, the focus should not be on the strength of the canisters as a technical barrier protecting the biosphere from radioactivity, but on people's opinions about these barriers and how these opinions influence their assessment of radioactive risks. Thus, the objects of sociological analysis are perceptions, interpretations, and socialization into social patterns of risk attitudes toward radioactivity and the disposal of nuclear waste.

This sociological purification of the social dimension has been successful insofar as it has created a distinct niche for sociological thought and provided important knowledge about how people perceive, understand, and act upon risks. Nevertheless, its strict separation between nature and society, with sociology investigating only the latter, is problematic. The strong focus on the social dimension has led to a paradoxical understanding of nature and artifacts, which Bruno Latour (2004, p. 33) has aptly described as follows:

Those who are proud of being social scientists because they are not naive enough to believe in the existence of an “immediate access” to nature always recognize that there is the human history of nature on the one hand, and on the other, the natural nonhistory of nature, made up of electrons, particles, raw, causal, objective things, completely indifferent to the first list.

The consequence of social constructivism is that we find, on the one hand, a society with a history and, on the other, a nature without history. Latour is critical of this kind of approach, which leaves important aspects of reality outside sociological analysis. According to him, the task of sociology is to transcend both realism and social constructivism, a task that necessarily entails that dichotomies—such as those between nature and culture, social and technical, actor and structure, science and society—be critically studied and not taken for granted. How and why these dichotomies are produced and reproduced should also be explained.

Latour's proposal for transcending the dichotomy between realism and social constructivism is to focus on the production of risk. Risks are produced by practices, by actors using instruments and technologies. It is therefore misleading for the sociologist to focus on perceptions, opinions, and experiences; the focal point for sociology should instead be to explore how risks are produced, by what means, and with what effects. The focus on practices means that there is no "real risk" behind our perceptions and actions, no risks separate from actors and society that actors could simply observe. Instead, there are numerous actors and activities in which nature, technology, and culture interact, resulting in the production of risks. There are no risks beyond socially produced risks, that is, beyond the measuring and monitoring of risk. Through these practices, not only knowledge about risks but also the risks themselves are produced. Thus, practices are performative; they not only describe reality but also shape it. By studying these practices, sociology can transcend the dichotomy between realism and social constructivism.

Governmentality: Toward an Individualized Risk Management

Ulrich Beck emphasizes that the current society is increasingly individualized, in the sense that individuals are seen as being responsible creators of their own lives and are therefore constantly required to make their own decisions. “The choosing, deciding, shaping human being who aspires to be the author of his or her own life, the creator of an individual identity, is the central character of our time”, as Beck (2002, p. 23) puts it.

This individualization, however, does not necessarily mean greater personal freedom. Beck captures this development with the term "institutionalized individualism" (Beck and Beck-Gernsheim 2002). At the same time as nation-states have outsourced many of their functions and operations, functions are being insourced to the individual level. What the nation-state, the employer, the union, or the family once provided is now presented as the responsibility of the individual. Individualization in this sense thus does not mean freedom of choice, but the compulsion to choose in a situation where no certainties exist. It is a "precarious freedom" centered on imperatives such as think, calculate, plan, adjust, negotiate, define, and revoke (Beck 2002). Even though we often lack knowledge of which choices are best, we are nevertheless required to make individual decisions and to take responsibility for the consequences.

There is a tension between institutionalized individualism and the risk society thesis of mega-hazards beyond human control. According to Beck (2008), the governing of incalculable risks and mega-hazards leads to the irony of putting an end to the free liberal society in its ambition to protect citizens from risks. At the same time, individuals are continuously ascribed responsibility for risks that are impossible for them to manage.

However, a certain strand of sociological thought, following Michel Foucault's (1991) work on governmentality, cultivates a perspective that takes neither individualism nor the character of risks and the risk society for granted. Scholars in this tradition argue instead that risks should be conceptualized and understood as a way of steering practice. The task of sociology is to study how, through technical apparatuses and administrative institutions, incalculable dangers are made into knowable and governable risks. Risks become a way of ordering reality and making it calculable, and expert knowledge is decisive in this (Rose 1993; Dean 1999).

Instead of making use of coercive power, the government can steer through norms, knowledge, and individual self-discipline. The reason is that we today live in an "advanced liberal society", based on a clear division between the state and civil society (Rose 1996, 1999). Civil society has emerged as an autonomous sphere in which individuals can express themselves as free citizens. To protect this autonomy, coercive means of governmental control are precluded, which means that more sophisticated instruments and mechanisms need to be developed: technologies for governing at a distance (Rose and Miller 1992).

This way of exercising power, in which those who are controlled feel autonomous, is based on tools for the self-development of those who are governed. Responsibility is placed on citizens to govern themselves, to act upon themselves, and to be responsible "for the security of their property and their persons, and that of their families" (Rose 1999, p. 247). An almost paradoxical relationship is created between the state and civil society, in which power is exercised with the goal of not being visible. It is characterized more by bringing citizens to perform a regulated freedom than by imposing coercive measures on them (Rose and Miller 1992, p. 174).

The strong emphasis on individuals as responsible governors of their own lives creates dilemmas. Increasingly, individuals have to face and make decisions on a range of issues characterized by uncertainty. An example is how genetic risks are governed with the aim of improving the quality of the population. During the development of the welfare state, it was an important task for government and public administration to guide, control, and intervene in the reproduction of the population; today, these decisions are delegated to individual citizens. The problem is no longer framed as improving the quality of the population but as a question of individual lifestyle. Today, reproduction is about promoting the self-governance of the client (Novas and Rose 2000). The responsibility to govern genetic risks, to decide about having children and to inform others about one's own genetic risks, has been made into a lifestyle choice. Plenty of experts, however, are willing to give advice on how to realize one's chosen lifestyle and to guide these choices in certain directions.

Risks thereby constitute a strategy for disciplinary power to monitor and govern individuals, and thereby whole populations (O'Malley 2008). Individuals who deviate from what is presented as normal behavior are seen as "at risk" and need to be controlled with the aim of modifying their behavior. This control is primarily a matter of self-management, with individuals being urged to protect themselves from certain risks (Giddens 1991). Risks are thereby de-socialized, privatized, and individualized; they become a responsibility of the individual and a way for government to govern the conduct of individuals. The sociology of risk should therefore be devoted to studying how problems are defined, by whom, and in relation to what goals, and through which practices, technologies, and rationalities this governing is accomplished and authority is exercised.

Further Research

As emphasized in the introduction, the specific contribution of the sociology of risk is to place risk in its social context. There are no risks "out there" in the sense of being independent of the society in which they emerge and are measured and monitored. Society is differentiated, which means that cognitions, understandings, and feelings about risks are differentiated too. Actors—including scientific ones—occupy various structural positions and cultural belongings and therefore understand risks differently. To develop sociological knowledge of risk is to contextualize risks: to associate them with specific actors, institutions, and settings. This means that no conceptualization, regulation, or research on risks is beyond sociological exploration; furthermore, scientific definitions of risk and technical risk analysis are themselves proper objects of sociological investigation.

This does not imply reductionism or relativism, in which different actors' understandings of risks are all that exist. On the contrary, risks should be understood as produced through social activities in which nature, technology, and culture interact. The sociology of risk should not be restricted to investigating risk perceptions, but should also study the definitions and uses of risk, including how different actors deal with risky nature and unruly technologies, how these are framed and regulated, and how, as a consequence of these activities, they are produced.

The five thematic areas described above have by no means been given final answers; they are in need of further research. As already emphasized, these areas are interrelated: organizational aspects of governing risks, public inclusion in risk regulation, the framing and production of risks, and the monitoring of individuals' risk behavior are interconnected. As with many other disciplines, sociology consists of different theoretical traditions, methodological assumptions, and analytical approaches. It will therefore never be able to give simple, single, and final answers to complex issues, and the sociology of risk is no exception. It is, however, able to produce knowledge on important topics; while not final knowledge, it can at least be more and better knowledge—theoretically informed and empirically sensitive—on the function and place of risks in different social settings.

By studying processes of risk assessment and risk management, the sociology of risk can make important contributions to preparing and realizing political and democratic discussion of risk issues, controversial as well as uncontroversial. By identifying and clarifying the political aspects of these issues, making frames and framing processes visible, and showing how technologies and political devices are embedded in social processes, it opens up risk regulatory processes to public scrutiny and evaluation. What may originally be framed as a technical issue, relevant only to a specialized group of experts, thereby becomes relevant to citizens. Risk regulation is about more than choosing the best regulatory instruments and finding the best technical solutions to predefined risks. It concerns building society and choosing a future.