1 The danger of losing situation awareness

Does it help if we agree that our concepts, like situation awareness, are operational rather than representational? De Winter (2014) suggests that we can all get along and keep doing our research if we see them as operational. It eradicates the need for any of the hermeneutic self-flagellations that we have seen pop up in this journal and elsewhere (Xiao and Vicente 2000; Angell and Straub 1999; Dekker and Hollnagel 2004). Operationalism almost reduces our concepts to mere mathematics. And mathematics, as Wittgenstein said, only needs to be consistent with itself. It doesn’t have to represent (or ultimately cannot even be proven to represent) anything in the real world.

But of course it does. Take a recent proposal of a model that links complacency to attentional bias and a loss of situation awareness (Parasuraman and Manzey 2010). John Flach warned against the inescapable circularity of such models some 15 years earlier:

Why did you lose situation awareness?

Because you were complacent.

How do we know you were complacent?

Because you lost situation awareness.

Complacency has elegantly and logically been shown to be, in a word, nonsense (Moray and Inagaki 2000), and a “loss of situation awareness” is analytically nothing more than a post hoc judgment that says we know more about the situation now than other people apparently did back then (Dekker 2013). Yet even that kind of peer critique doesn’t keep researchers from “proving”, for instance, that a loss of situation awareness causes more incidents when an airline captain is at the controls than when the first officer is (Jentsch et al. 1999). Despite the original protestations that accompanied the introduction of “situation awareness” to the human factors lexicon (Billings 1996; Flach 1995; Sarter and Woods 1991), situation awareness is regarded as a causal construct that exists in the mind of a human operator after all (see Flach 1995).

With such scientific legitimation, we can hardly blame lay people for seeing such a construct as a “convenient explanation that [they] easily grasp and embrace” (Flach 1995). To them (like many researchers), these constructs reflect an important empirical reality. They are deeply representational. Loss of situation awareness has become the favored cause for mishaps in aviation and other settings. Eighty-five percent of reports produced by the Australian Transportation Safety Bureau in 1996 contained references to a “loss of situation awareness” (ATSB 1996). And investigators at the US National Transportation Safety Board have combined complacency and situation awareness in all their circularity more than once. That combination, for example, allowed them to “explain” why a crew took off from the wrong runway at Lexington, resulting in the deaths of 49 people including the captain (NTSB 2007). Apparently our concepts are representational enough for them to blame the dead or the living.

I learned recently of a criminal court case against an operator who, in the words of the prosecution (the Crown in this case), had “lost situation awareness” and had therefore been criminally negligent in causing an accident that killed two people. In another case, the coroner who investigated a friendly fire incident that killed three British soldiers in Afghanistan in 2007 rendered the verdict that the crew of an American fighter jet had lost “situational awareness” and were looking at the wrong village when they dropped the bomb (Bruxelles 2010).

This is no longer just an “operational” use of situation awareness. It is a deeply representational use, and it represents a lot more than a causal agent in the mind. Situation awareness, in these cases, represents a duty of care, the deontological commitment expected of practitioners whose actions can influence the lives of others. When others can demonstrate that a practitioner lost such situation awareness (which is very easy to do), it represents an absence of a duty of care; a breach of the fiduciary relationship with patients, passengers, colleagues, collateral; a failure to live up to the deontological commitment. It represents a possibly prosecutable crime. This makes situation awareness representationally rich beyond our wildest dreams, yet leaves it operationally entirely impoverished. Your loss of situation awareness is merely the difference between what you knew then and what I know now. Which is also what you should have known, but you didn’t because you were negligent. Or are you negligent because you didn’t know? Ah, it doesn’t matter: there will surely be a way to deem you guilty anyway.

Other people can always show that there was more in the world than there was in the mind, because in hindsight anybody can show that. And they can then call that difference the practitioner’s “loss of situation awareness.” Our research and our literature have legitimated (or done nothing to inhibit) such representational bastardization. They might even aid and abet it. Consider the very first sentence of a recent article that attempts to introduce one particular model of situation awareness to anesthesia (Schulz et al. 2013): “Accurate situation awareness (SA) of medical staff is integral for providing optimal performance during the treatment of patients.” Just imagine the following exchange, which may show up in medical liability, medical indemnity, or even criminal negligence cases:

Q. Wouldn’t you agree, doctor, that accurate situation awareness by medical staff like yourself is integral for providing optimal performance during the treatment of patients? This is what the leading journal in your specialty claims. See, here it says so [counsel points to exhibit].

A. Uh, I’d have to agree.

Q. Would you say, doctor, that your performance in this case, in which your patient died as a result of the care you provided, was optimal?

A. Uh, we all hoped for a different outcome.

Q. Were you, or were you not, aware of the situation that this particular drug X, when used in combination with Y and Z, had produced problems for this patient 18 years before, when she was living in another State?

A. I was not aware of that at the time, no.

Q. Yet you agreed that accurate situation awareness is integral for providing optimal performance during the treatment of patients?

A. … [silence].

Q. No further questions.

I would like to see colleagues who champion the construct help defend the practitioner who is accused (implicitly or explicitly) of losing situation awareness. I do not know whether they, or anybody, can. Constructs such as situation awareness lock human factors into a hopelessly old-fashioned dualist ontology where there is a world and a mind, and the mind is merely the (imperfect) mirror of the world. If we urge people to be less complacent, to try a little harder, then that mirror can become a little less imperfect. The inverse is true too. If people turn out to have had an imperfect mental mirror of the world (a loss of situation awareness), we know that because the outcome of their actions was bad, and in hindsight we can easily point to the exact few critical elements that were missing from their mental picture. We, or others, can then blame their deficient motivation (their complacency, their violation of the duty of care, their breach of the fiduciary relationship) for this imperfection.

Saying that these constructs are closer to operationalism than representationalism is head-in-the-sand, hide-in-the-ivory-tower apologetics. It is a run for moral cover. What we need to ask, as Dietrich Bonhoeffer once suggested, is whether this creates a kind of world that we ourselves still want to live in. As a member of the human factors community, I do not want to contribute to a world where our constructs are going to be used against the very people on whose side our field was born. For I believe that human factors research has pretty much always been on the side of the human operator. It has tried to explain performance problems not by reference to behavioral or motivational shortcomings, but by their systematic relationship to the design of the equipment that people are made to work with. Our own use, and other people’s use, of supposedly neutered, operational constructs such as complacency and situation awareness may help undo that very legacy.

This goes for any of the constructs we bring into being. Our words matter. Our words have consequences. Our words help conjure up worlds for other people—people with legal battles to win, people with prosecutorial ambitions to satisfy, people with insurance payouts to reap and people with design liability to deny. These are worlds beyond the safe, seemingly objective operationalization in laboratories populated with undergraduate student subjects. These are worlds where our words attain representational powers that go way beyond the innocuous operationalism we might have intended for them. These are worlds in which real people—professional practitioners—are put in harm’s way by what we come up with. We cannot just walk away from that.