1 Human error: trend and countertrend

1.1 From componentialism to systems explanations

One of CSE’s driving forces has been an understanding that success and breakdown in safety-critical systems should not be attributed to component breakage or human failures, but rather have to be seen as connected to the complexity and dynamics of the activity itself. This includes the organizations that manage the activity, that regulate it, that use it, and that fund it. This has sometimes been characterized as the “new look” (Cook et al. 1998) or “new view” of human factors and system safety (Dekker 2005), where second, more analytic stories about organizational functioning supplant less empirical conclusions regarding “human error” in incidents and injury. This new view has already produced a considerable research base on how complex systems fail (Woods and Cook 2002).

The build-up of this research base coincides with some fundamental shifts in Western science over the last 30–40 years (Wallerstein 1996). Newtonian-Cartesian precepts, especially the ideas of causality and the decomposition of complex systems into constituent elemental parts, have long been dominant. These precepts have led us to think about human action in terms of linear sequences of causes and effects and to seek “root causes” in the malfunctioning or breakage of component parts. For years, this has offered investigations analytical leverage because it seemingly matched the way the world works. The risk of this becoming a relatively simplistic form of social physics has been countered by CSE and by broader trends in science, which point to a world that is not linear, where emergent properties make system-level behavior difficult to predict on the basis of component behavior, and where there is no Newtonian symmetry between cause and effect. Rather, infinitesimally small changes in starting conditions can lead to very large changes later on. It is now orthodoxy in CSE that spectacular accidents do not require spectacular causes but rather emerge from the everyday, normal work processes embedded in layers of systems designed to manage multiple contradictory goals simultaneously (e.g., production, safety, punctuality, service, competitive advantage, regulatory compliance).
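This sensitivity claim can be stated compactly. As a minimal sketch, borrowed from standard dynamical-systems notation rather than from the CSE sources cited here, consider two trajectories of a nonlinear system that begin a small distance $\delta(0)$ apart. Where the system’s leading Lyapunov exponent $\lambda$ is positive, the separation between the trajectories grows exponentially:

$$\lvert \delta(t) \rvert \approx e^{\lambda t}\,\lvert \delta(0) \rvert, \qquad \lambda > 0.$$

However finely the starting conditions of the components are measured, the separation eventually reaches the scale of the system itself, which is one formal reason why component-level knowledge cannot underwrite long-range, system-level prediction.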

There are now safety and accident investigations acting on these insights, extending beyond the incident point, going back into history, and into the organizations tasked with running the safety-critical operation and the society surrounding it. For example, one airline accident in Canada first attributed to pilot error was later seen to be linked to failures in the country’s entire aviation system. As another example, Valeri Legasov, chief investigator of the 1986 nuclear accident at the Chernobyl reactor, “drew the unequivocal conclusion that the Chernobyl accident was … the summit of all the incorrect running of the economy which had been going on in our country for many years” and that it was “impossible to find a single culprit” (pre-suicide tapes 1988, in Mould 2000). Similarly, 15 years later, the Space Shuttle Columbia Accident Investigation Board argued that “the causal roots of the accident can be traced, in part, to the turbulent post-Cold War policy environment in which NASA functioned during most of the years between the destruction of Challenger and Columbia” (CAIB 2003, p. 99).

The result is that it may be up to society, not experts, to decide whether to embrace complex, tightly coupled technologies, such as nuclear power generation, that carry an inherent potential for accidents. In short, society is now seen as having to bear the responsibility for the success or failure of its technologies (e.g., Perrow 1984). Conclusions like this of course leave open the question of which social actors and institutions can be called upon to make such decisions and to address questions of responsibility. There is a certain irony here, because “leaving the system open” with respect to such questions (where to locate “cause,” and responsibility in particular) may have a series of unintended consequences that those who have argued for a systems approach to error may not, or could not, have foreseen.

1.2 Back to componentialism: the judicial movement against human error

While systems investigations based on the precepts of CSE may have become more legitimate in professional and practitioner worlds, there is a countertrend expressed by the increase in judicial action in the wake of accidents. Hardly any fatal or serious accident in, for example, aviation now fails to attract prosecutorial interest, and many indeed lead to the conviction on criminal charges of the front-line people involved: for example, the fatal aircraft collision in Milan (Learmount and Modola 2004), a near-miss in Amsterdam (Ruitenberg 2002), the 2000 Concorde accident near Paris (Cowell 2008), and recent maintenance-related airliner accidents in Greece (Hadjicostis 2009) and Spain (Govan 2008). As another example, medical errors brought to court as putative manslaughter have increased by up to 200% since the 1970s in the United Kingdom (Ferner and McDowell 2006).

Even some researchers have asked whether CSE has gone too far in pushing for diffuse, multi-factor systems explanations (Reason 1999; Gawande 2002; Shorrock et al. 2004). There are doubtless a number of bases for the resistance against diffuse explanations for failure, and these have probably not received the attention they deserve in the literature. In the meantime, some practitioners of safety-critical work are reacting to the apparent trend toward componential explanations and criminalization in fields like aviation and medicine (Sibbald 2001; Horns and Loper 2002; McNeill and Walton 2002). Such responses often go beyond protest to outright refusal to cooperate in investigative or regulatory inquiries (e.g., Schmidt 2009).

One of these bases seems directly related to the new view of accident and tragedy. In brief, given how the West tends to think about individuals and responsibility, it should be no surprise that moral accountancy, when “suppressed” in one domain, will emerge, perhaps with even more force, in another. Cognitive systems engineering, while having argued for a more systemic view of accidents, has not built into its models a number of significant elements related to how authority and responsibility are habitually thought about or “worked out” in the West in regard to error. Whether this is because these elements were never seen, or were not thought to be important or relevant, is an issue to be taken up at another time. Certainly, all models have their limitations. The irony here is that the emphasis on system, systemic properties, and causal “reach” (deeper into history and society) may be one reason why important, even causal, elements related to error have been ignored in the new view. Below we outline what some of these elements might be.

2 Free will and the regulative ideal of moral thinking in the West

2.1 Sin and a rational choice to err

Through its focus on context and how it imposes constraints on human–system interaction, cognitive systems engineering has shifted focus away from issues of individual responsibility, de-emphasizing anything that suggests autonomy or free will. While folk assumptions about the exercise and availability of free will on the part of practitioners in situ often dominate post hoc analyses of error, CSE argues that explanations like these do not take into account the conditions under which people work. For example, in CSE, resource limitations and uncertainty are believed to severely constrain the choices open to individual actors. Van den Hoven (2001) called this “the pressure condition.” Operators such as pilots, physicians, and air-traffic controllers are perceived as “narrowly embedded,” i.e., not able to exercise free will or autonomy; they are “configured in an environment and assigned a place which will provide them with observational or derived knowledge of relevant facts and states of affairs” (p. 3). Such environments are seen as hostile to the kinds of reflection and action that the CSE literature equates (but hardly specifies) with individual moral responsibility. Still, it is in this reading of individual right, responsibility (morality), and its consequences that one finds the legitimating myths of Western moral thinking, like the story of Adam and Eve. This account, its treatment by subsequent theologians, and how it has informed Western society have made the link between rational choice and error (sin) pervasive in Western thinking (note the effort and years it has taken psychology to depart from rational choice models of moral action). The particular links we make in the West between choice, morality, and error have gone largely unchallenged because they match what we see as common sense. In other words, they form the basis of what might be called our folk theodicy.

The story of Adam’s original sin forms an important basis for this theodicy, especially what Augustine made of it, even if this is seldom recognized or acknowledged. It frames the kinds of proscriptive and retrospective negotiations that we have internally and with others on behalf of people carrying out safety-critical work in real conditions. Eve had a deliberative conversation with the serpent on whether to sin or not to sin, on whether to err or not to err. The allegory emphasizes the conscious presence of cues and incentives not to err that are invoked in accounts of failure nowadays, as well as warnings to follow rules and to avoid sin and error. Nevertheless, Adam and Eve elected to err anyway. This is a very Western story that specifies a particular kind of relationship between error, choice, and consequence. In the Judeo-Christian tradition, it assumes that individuals had the necessary intellect, had received the appropriate indoctrination (do not eat that fruit), had the capacity for reflective judgment, and actually had the time to choose between a right and a wrong alternative. Adam and Eve then proceeded to pick the wrong alternative, a choice that made a difference in their lives and the lives of many subsequent others. The story of original sin portrays how we think about self, error, and responsibility, and how we probably have thought about them for ages. The idea of free will permeates not just our folk theories about morality but also our scientific models. The result is that the story of Adam and Eve continues to influence how we think about and assess human performance to this day.

It is important to note that the legacy of this theodicy is peculiar to the West. Few of the other creation stories we know of place as much emphasis on free will as the story of Adam and Eve does. J, the author of the story, portrayed the serpent with quite human contours, a human alter ego as it were, and consciousness. Freedom of action, without coercion or constraint, is necessary for moral responsibility (even if it may not be sufficient). That is why it may have been so important for J to cast the serpent anthropomorphically. We do not find here a Satan who came up from Hell (none of those concepts existed in J’s time) to arm-twist the gullible into buying into his agenda. No, all J said was that the serpent was crafty. But so was Eve. The serpent began by inquiring, “Is it true that God has forbidden you to eat from any tree in the garden?” No, Eve explained to the serpent. The fruit of the tree in the middle of the garden was not even to be touched, never mind the eating part. She had made that part up, as God’s prohibition was only against eating, not touching. But, she argued, if they were to touch it, they would die. Nonsense, the serpent replied, and after further debate, Eve relented. The result was something that neither Eve nor the serpent could have foreseen: Adam and Eve acquired something close to Divine insight into the moral conditions of the world.

Augustine’s fifth-century interpretation of J’s story became Roman Catholic doctrine, affecting our culture and everyone in it, Christian or not, to this day. Suffering, said Augustine, is the result of human faults. Augustine recognized that people would rather feel guilty than helpless, so they would be willing to accept a version of J’s story in which human suffering occurs solely because of human fault or failure. This appealed to the very human need to feel ourselves in control, even at the cost of guilt (Pagels 1988). Augustine’s version of the second creation story also played brilliantly on the human tendency to accept and allocate responsibility for suffering. This is a response to the fundamental need to find meaning, to find a reason, to find something to hold onto, when confronted with seemingly random, inexplicable pain or tragedy. Blaming our ancestors Adam and Eve for original sin, said Augustine, also made death something unnatural. Keep in mind that before the Fall, Adam and Eve were immortal. Thus, death and mortality were all their fault: death and tragedy can, even should, be traced to a specific act by a specific person. Consider how much this matches not only Western judicial tradition, but also the anthropological observation that Western culture does not differ much from so-called primitive ones in that both refuse to accept the notion of “natural death” (see Douglas 1992). What Augustine assured us is that while death and suffering may be unnatural, they were not random. The meaning of suffering and tragedy should be sought in moral choices, Augustine said. (Of course, in another story from the Old Testament, Job reminds us that after the Fall what constitutes right or moral action may look very different to man or to God.)

2.2 Sin and individualism

From a Christian perspective, and Protestantism further emphasized this, sin is seen as an individual act, one that is the product of deliberate, even rational, choice. In the Western intellectual tradition, it has seemed self-evident to see ourselves and evaluate ourselves as individuals, bordered by the limits of our minds and bodies. It also seemed self-evident that we should be evaluated in terms of our own personal achievements (or failures to achieve). From the Renaissance onward, the individual became a central focus of intellectual discourse, fueled in part by Descartes’ psychology, which further validated in the West a notion of persons as “self-contained individuals” (Heft 2001). Cognitive systems engineering has attempted to mitigate this common-sense notion of the individual as contained (which is still taken for granted by those in human factors who equate human thought and action with individual information processing). Cognitive systems engineering has done this by constructing the individual as but one actor in a system of agents, both human and machine, whose interactions and interconnections collectively determine performance outcomes. In brief, CSE has taken the system, not the individual, as the unit of interest and analysis. Yet the rampant Horatio Alger ideal of individualism that developed in North America in the late 19th and early 20th centuries has made any attempt to think in terms of systems and multiple actors that much more difficult. It has meant that any attempt to explain tragedy as a multi-causal, multi-actor event almost inevitably turns into a search for a single villain. The argument that it takes teamwork, or an entire organization, or an entire industry to break a system seems counterintuitive given how the West tends to define both self and responsibility.

3 Anxiety and witch hunting

3.1 Fear of uncertainty

Within the Western folk model of causality, when it comes to accidents, the problem is reduced to identifying the responsible individual. It also drives a search for explanations of why it was this individual and not someone else. The Western moral and scientific systems that inform how causality and responsibility are defined and apportioned default to the individual. This sets the stage for another epistemological dilemma: what happens when no single cause or explanation can be identified (a predicament sometimes known as Nietzschean anxiety). For example, this is how Snook (2000) described his attempt to analyze the friendly-fire shoot-down of two U.S. Black Hawk helicopters by U.S. fighter jets over Northern Iraq in 1994:

This journey played with my emotions. When I first examined the data, I went in puzzled, angry, and disappointed—puzzled how two highly trained Air Force pilots could make such a deadly mistake; angry at how an entire crew of AWACS controllers could sit by and watch a tragedy develop without taking action; and disappointed at how dysfunctional Task Force OPC must have been to have not better integrated helicopters into its air operations. Each time I went in hot and suspicious. Each time I came out sympathetic and unnerved… If no one did anything wrong; if there were no unexplainable surprises at any level of analysis; if nothing was abnormal from a behavioral and organizational perspective; then what have we learned? (p. 203)

Snook (2000) confronts the question of whether any kind of moral control of complex systems is possible at all. What does it mean if we can find no wrongdoing, no surprises, no kind of deviance? If everything was normal, then how could the system fail? This challenges not just our attempts to do science, but also our moral certitude about how the world works. As such, “rudderless” events can lead to almost any kind of fabulation, even conspiracy theory, in an attempt to fill in this epistemological space. Investigations that do not turn up a “Eureka part,” the label given to this in the TWA 800 probe, are feared not because they are bad investigations, but because what they imply provokes anxiety. As Nietzsche pointed out, the need to find a cause is fundamental to human nature. Not being able to find a cause is profoundly distressing because it implies a loss of certainty and control. As such, it challenges how we think the world ought to work. So what do we do if there is no Eureka part, no nucleus, no one individual who is “at fault”? Is it possible to acknowledge that failure can result from normal people doing business as usual in normal organizations? Not many accident investigations succeed at this. As Galison (2000) noted:

If there is no seed, if the bramble of cause, agency, and procedure does not issue from a fault nucleus, but is rather unstably perched between scales, between human and non-human, and between protocol and judgment, then the world is a more disordered and dangerous place. Accident reports, and much of the history we write, struggle, incompletely and unstably, to hold that nightmare at bay. (p. 32)

Galison reminds us that this fear (this nightmare) comes from not feeling in control of, and not being able to understand, the systems we design, build, and operate. The failures that emerge from normal, everyday system interactions question what “normal” is. It is this threat to our epistemological and moral accountancy that makes accidents of this kind so problematic. We would much rather see failure as something that can be traced back to a single source, a single individual. When this is not possible, accuracy or fairness in the assignation of blame and responsibility matters less than closing off or reducing the anxiety associated with having no cause at all. In the Western scheme of things, being afraid is worse than being wrong, and being fair is less important than being appeased. Finding a scapegoat is a small price to pay to maintain the illusion that we actually know how a world full of risks works (many of them of our own or society’s making, if one accepts Perrow’s (1984) argument).

3.2 Regression to componentialism and theodicy

Criminalizing the accidental is one way to restore moral and epistemological certitude to the world. Judicial responses to failure (the excision of a single culprit) often seem to follow a script taken from a medieval morality play. For example, today’s jural accounts of failure test out hypotheses of moral assertion (Erikson 1966; White 1974; Rock 1998). They also enact a kind of substitutive anxiety (Wilkinson 2001). In other words, fears of uncertainty, social change, and moral decline may drive the development (and acceptance) of simple causal models just because they reinforce and validate moral boundaries, reaffirm commonly held beliefs and views of the world, and allay people’s fears of the uncontrollable (Bailey 1994; Thomas 2007).

The increase in judicial action in the wake of accidents, and the calls within the human factors community for a return to componential explanations for failure (Reason 1999; Shorrock et al. 2004), can be seen as a kind of intellectual and moral regression to earlier periods in European history. Most witch hunts (and the most violent of them) did not occur in the Dark Ages but during the Renaissance, a period of enlightenment and growth of scientific knowledge (Rotenberg 2003). In Sweden, for example, the largest witch trials took place from 1668 to 1676, two centuries after the birth of Copernicus, more than 30 years after the death of Galileo, and around the time many reputable universities were founded in Europe. In the town of Rättvik, seventy people were put on trial for witchcraft in 1671, when more than five hundred children gave testimony about nightly airborne trips to the devil’s abode (Lennersand 2008).

The idea of “identifying and removing the unfit,” reified and expressed in the Calvinistic notion of predestination, has been judicially legitimated as a solution to deviance in Western societies since the seventeenth century (Rotenberg 2003), and echoes of these beliefs can be found in many of today’s proposals for managing the risks of complex systems. This all rests, of course, on the belief that witchcraft or, in today’s phraseology, misfortune has to “stand” for something else (see Rock 1998). In other words, arriving at an explanation of witchcraft or misfortune involves interpretive work of a particular kind that natives or insiders cannot carry out entirely by themselves. In both cases, “explanation” takes the same form: the symbolic and rhetorical transformation of particular actors into agents whose actions are defined as morally or epistemologically suspect.

What Durkheim added to this (and most of the theories of social order that inform human factors research have been influenced by him) was a kind of ontological confidence. In other words, if one follows Durkheim, making statements about an individual’s action in the world is of the same order of difficulty, no more, no less, as making statements about events in the natural world. What Durkheim added to the West’s moral discourse about accidents, responsibility, and the individual was the belief that these relationships were relatively easy not only to map out but also to predict. It is in this way that local events in accidents and tragedy, informed by a Western reading of what constitutes morality and responsibility, can be merged with putatively more universal and “scientific” notions and theories of cause and effect. However, neither victims nor investigators are probably aware that this occurs: the existential drive to find a “reason” for bad things that happen simply merges into (and informs) the “scientific” or technical search for a cause.

4 Conclusion

The regression toward componential explanations for failure in both the judicial and the academic community reminds us that safety work does not just reflect technical or scientific considerations. It is not, in other words, just grounded in a discussion of what constitutes cause and effect, however tightly or loosely this mechanism may be defined at the time. Rather, safety work is informed by, and is also a formative part of, the Western moral enterprise that focuses on responsibility, choice, and error, an enterprise derived inevitably from Christian and especially Protestant perspectives.

The existence of this ideological substratum, as Foucault might have termed it, is taken for granted and seldom acknowledged, let alone challenged. This helps explain why the regression to the individual and his or her moral character is so hard to get rid of, despite CSE’s work over the past decades. The ideology of error, in other words, is alive and well because key elements of Western culture, theology, and theodicy all help to hold it up and reaffirm it. An ideology offers social actors resources for reproducing a particular social and moral order, and in turn provides the rhetorical basis for keeping it alive. As Smith points out, an ideology often supports a set of “interested procedures”: methods of not knowing or not recognizing what is embedded in legitimated institutions, so that those institutions continue to be reproduced. The ideology of individual moral choice and responsibility not only animates judicial action and componential procedures premised on finding “human errors”; it also reaffirms these procedures, the institutions that support them, and the central premises of the ideology itself.