1 Situation awareness

Situation awareness is being discussed again (Dekker, this issue). What is the point? To the casual observer there is surely nothing left to say. The discipline has a dominant account of the concept, one in widespread use within academia and industry; moreover, one that is simple, tractable and appears to make intuitive sense. Dekker’s paper tells us, once again, that behind the scenes situation awareness might not be as fully resolved, or even as fundamentally useful, as we might think (Flach 1995; Patrick and Morgan 2010a, b; Salmon et al. 2008a; Salmon and Stanton 2013). The main problem for Dekker (this issue) seems to be the frequency with which situation awareness is cited as a direct causal factor in the aftermath of adverse events. The paper asks whether we should be worried when we encounter terms such as ‘loss of situation awareness’, ‘lack of situation awareness’, ‘degraded situation awareness’ or ‘poor situation awareness’ in accident investigation reports, or when it is put forward as a key causal factor in military friendly fire incidents (Rafferty et al. 2013), aircraft crashes (e.g. Jones and Endsley 1996), collisions between cars and trains at rail level crossings (e.g. Salmon et al. 2013) and maritime incidents (e.g. Grech et al. 2002). The answer, according to Dekker, is yes.

For well over half a century, we have travelled away from simply ‘blaming the operators’ towards considering a set of more systemic factors. From Heinrich (1931) to Rasmussen (1997) and beyond, the system boundaries have continually expanded in order to permit a richer critique of how events ‘emerge’ from interacting factors, without having to take the system apart in order to reach this understanding. In current usage, however, ‘loss of situation awareness’ could be a regressive term. If it does not place responsibility squarely on the shoulders of individuals who ‘lost’ situation awareness, then it certainly focuses attention on the human rather than the system as a whole. If one considers situation awareness to be something that resides exclusively in the heads of individuals, then it is quite literally ‘on the shoulders’ of people, and more often than not those at the front line, such as pilots, air traffic controllers and drivers, rather than CEOs, entire organisations or emergent properties that have no convenient structural category to fall into (Walker et al. 2009).

Dekker (this issue) discusses several important and potentially disturbing moral and ethical issues associated with the use of the concept in this way. His thought-provoking article, and others like it (Flach 1995), brings into question whether the concept is even useful at all. This article is a response. We are of the view that situation awareness ‘is’ a useful concept. In many respects, we agree almost entirely with Dekker’s analysis as it is described; the underlying issue, however, is that situation awareness has become so closely wedded to a particular theory that people make the mistake of seeing the two as one and the same thing. They are not. Dekker’s paper presents an excellent analysis and argument but offers no solution. We put forward our work in distributed situation awareness (DSA) as a candidate. This short article holds DSA up against the key issues Dekker sets out in his paper so that researchers and practitioners in this realm can judge for themselves.

2 The component view versus the systems-level view

Dekker’s article centres on the notion that it is wrong to search for broken components in the aftermath of accidents, and that attributing blame to human operators for ‘losing situation awareness’ is therefore also wrong (Dekker 2011; this issue). Safety and accidents are shaped by the decisions of all actors, not just front-line workers in isolation, and accidents are caused by multiple contributing factors, not just one bad decision or action (Rasmussen 1997). They are emergent properties arising from nonlinear interactions between multiple components across complex sociotechnical systems (e.g. Leveson 2004). This ‘systems view’ is not new. It is widely accepted; indeed, it resides at the core of the discipline of human factors itself and has done ever since the discipline’s modern origins (Trist and Bamforth 1951). It is hardly surprising, therefore, that Dekker feels serious ethical and moral problems arise when a model of situation awareness that foregrounds individual cognition is brought into contact with highly systemic problems. The differences between alternative models of situation awareness are highly contentious (Endsley, in press). On one hand, Endsley (in press) puts forward seven fallacies into which, it is argued, other models fall and which they promulgate through the literature, causing confusion. On the other hand is the much simpler idea that, given the complexity of the sociotechnical systems that form the subject of much contemporary analysis, the study of information processing in the minds of individuals has lost relevance (Hollnagel 1993). Endsley and colleagues have made unquestionably good progress on numerous thorny psychological issues around their model, but we contend there is a much more elegant solution: instead of looking at the information processing of a person embedded in a situation, look instead at the interactions or transactions that take place between actors: from nodes to links. This is the main essence of DSA.

2.1 Situation awareness is a systems phenomenon, not an individual operator one

In addition to Dekker’s viewpoint (this issue), recurring arguments against situation awareness and its measurement include: that it is not possible to accurately describe the awareness held by somebody else; that it is not meaningful to examine the mind independently of the world (Dekker 2010, 2013; Hutchins 1995; James 1890); and that, as researchers and experimenters, we simply cannot know exactly what other people know, since it exists within their own heads. All of these arguments, however, become moot when situation awareness is considered a systems phenomenon; that is, when situation awareness resides within the overall system and not solely in the individuals undertaking work (Salmon et al. 2009; Stanton et al. 2006).

This viewpoint allows situation awareness to be approached from a different perspective. Rather than trying to understand the ‘component’ humans in the system by analysing their individual cognition, which requires ever more experimental complexity and effort, DSA bypasses this by focussing on the interactions and ‘transactions’ between them. The powerful criticism that practitioners cannot see inside people’s heads is thus avoided, simply because there is no need to. By focussing on these ‘transactions’, which, as we know from Hutchins’ (1995) pioneering work in distributed cognition, occur between humans and artefacts as well as between humans, it is possible to build a network comprising concepts and the relationships between them. These concepts are founded on research into ‘transactive memory’, which revealed the extent to which people rely on other people (Wegner 1986) and on machines (Sparrow et al. 2011) to remember for them. We extend this notion to people and objects in networks. This ‘situation awareness network’ represents the system’s DSA, and through further interrogation it is possible to determine who in the system has access to what knowledge at different points in time (e.g. Stanton et al. 2006). The outputs can be illuminating. Being completely naturalistic, with no task interruptions or other significant experimental artifices, the analysis is strongly data driven, shedding light not on what people should do but on what they actually do (Salmon et al. 2008b; Stanton et al. 2006; Walker et al. 2010); on affordances embedded in the system (Walker et al. 2013); and on how to design a system so that the required information/awareness is in the places it needs to be (Stanton et al. 2009). The results have been successfully translated into numerous domains and welcomed by researchers who had not been able to make progress with existing approaches (Bourbousson et al. 2011; Fioratou et al. 2010; Golightly et al. 2010, 2013; Macquet et al. 2014; Patrick and Morgan 2010a, b; and others).
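
To make the idea concrete, the sketch below shows one minimal way such a network could be represented and interrogated in code. It is an illustration only, not the modelling software used in the studies cited above; the class, the transaction format and the node names are all hypothetical constructions of ours, offered under the assumption that a transaction can be reduced to ‘who passed what to whom, and when’.

```python
# A minimal, hypothetical sketch of a situation awareness network:
# nodes are human and non-human agents plus pieces of information;
# timestamped links record 'transactions' (who passed what to whom).
from collections import defaultdict

class SANetwork:
    def __init__(self):
        # transactions[info] -> list of (time, source, receiver) links
        self.transactions = defaultdict(list)

    def record(self, time, source, receiver, info):
        """Record a transaction: `source` passes `info` to `receiver` at `time`."""
        self.transactions[info].append((time, source, receiver))

    def holders(self, info, at_time):
        """Which nodes (human or artefact) hold `info` by `at_time`?"""
        held = set()
        for t, source, receiver in self.transactions[info]:
            if t <= at_time:
                held.update((source, receiver))
        return held

# Illustrative use, with invented node names:
net = SANetwork()
net.record(0, "track_circuit", "crossing_lights", "train approaching")
net.record(1, "crossing_lights", "road_user", "train approaching")

print(net.holders("train approaching", at_time=0))  # e.g. {'track_circuit', 'crossing_lights'}
print(net.holders("train approaching", at_time=1))  # the road user is now included
```

Interrogating the network in this way answers ‘who had access to what knowledge, and when’ without making any claim about what was inside anyone’s head.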

2.2 It is the system that ‘loses’ situation awareness, not individual operators

Let us return to Dekker’s moral and ethical dilemmas and the tendency for the dominant dialogue in situation awareness to foreground the role of people who lose situation awareness (rather than the system). This dominant dialogue is emblematic of the ‘broken component’ view of accident causation that Dekker criticises (Dekker 2011). How often are we told that a pilot was not aware of the plane’s altitude, that a truck driver was not aware a train was approaching the rail level crossing, or that a battle group were not aware that a group of their own men were located on the other side of the canal? In such circumstances ‘loss of situation awareness’ by human operators has to be the primary cause. Doesn’t it?

No. The DSA viewpoint is in total agreement with Dekker on this point. Loss of situation awareness by any individual cannot reasonably be labelled as the cause of an accident; not because loss of situation awareness does not happen, but because the label is not useful. The system loses situation awareness, not the individuals working within it. DSA is held by the system and built through interactions between components, both human and non-human; it is therefore an emergent property. In most meaningful contexts, situation awareness is not something that can be held by one individual alone and therefore cannot be lost by one individual alone. Let us zoom out to the bigger picture. Let us say a person did lose awareness, and let us further say we will implement some form of countermeasure. Problem solved? Perhaps not. The strategic issue we face in many safety critical domains is a persistent class of accident which continues to bounce back ‘despite’ well-intentioned countermeasures like these (CAA 2011; EASA 2010; RSSB 2009; Walker et al. 2009). It was by working at precisely this interface that the DSA concept emerged (Stanton et al. 2006), and it has made progress ever since.

2.3 Broken components versus broken systems

The systems-level view demands a different approach to accident investigation. When loss of situation awareness seems to have played a role in an adverse event, the accident investigator needs to examine the overall system to determine the why, not the who. Why was it that the pilot, the truck driver or the commander was not aware of something important? When ‘loss of situation awareness’ takes place, it is not appropriate to begin with the individual and try to expand outwards. Rather, a DSA approach is required, whereby one starts with the system and focuses inwards (if necessary). The recent Kerang rail level crossing tragedy in Victoria, Australia, provides a compelling example. Here, the driver of a loaded semi-trailer truck continued towards a rail level crossing apparently unaware that a passenger train was also approaching. The resulting collision killed 11 train passengers and injured a further 15 people, including the truck driver.

Following an exhaustive investigation, the Office of the Chief Investigator concluded that the train and train crew, the truck, the road and rail infrastructure, and the rail level crossing warning devices had all played no causal role in the incident (OCI 2007). The investigators commented that, ‘for reasons not determined the truck driver did not respond in an adequate time and manner to the level crossing warning devices’ (OCI 2007, p. 72). In short, the investigation focussed on his lack of ‘situation awareness’ regarding the approaching train. He was subsequently prosecuted on the basis that he had failed to keep a proper lookout. After pleading not guilty to eleven counts of culpable driving causing death and eight counts of negligently causing serious injury, he was acquitted by a jury.

The broken component approach criticised by Dekker (this issue) could not fully explain the incident. More worryingly, it did not fully account for the role of other decision makers and non-human components (e.g. warnings, risk assessment tools, incident-reporting systems). Worst of all, it did not produce design recommendations that would improve performance and safety. Instead, the rail level crossing was subsequently modified to include boom gates, light-emitting diode (LED) lights, rumble strips and active advanced warning signs. This ‘component fix’ response strongly suggests that the rail industry felt the truck driver was broken; the appropriate response was therefore to try to ‘fix’ him with yet more warning devices. Experience tells us, however, that the component fix approach does not prevent such accidents from continuing to bounce back and haunt us (e.g. RSSB 2009), and it is precisely these types of accident that are becoming of strategic concern and for which a systemic approach is required.

In the example above, it turns out the system was indeed broken (and more than likely still is). Examination of the investigation report through a DSA lens revealed the various interactions that enabled the loss of situation awareness (Salmon et al. 2013). For example, there was a lack of communication across the system regarding previous near-miss incidents at the rail level crossing. The risk assessment tool provided an assessment that did not raise sufficient concern regarding safety at the crossing; this in turn ensured the crossing would not be upgraded to full boom barriers for some time, and budgetary constraints limiting crossing upgrades exacerbated this. Physical features at the crossing obscured the approaching train. The loss of situation awareness was the system’s fault, not the truck driver’s. Having driven the same route as part of his job once a week for seven years without ever encountering an approaching train, the truck driver checked the flashing lights and did not perceive them to be flashing. How can the truck driver be blamed when the onus was on the system to support his tasks in that context, at that time, with those factors present? The transaction was not achieved. The system had lost situation awareness, not the truck driver, and the implications flowing from this extend far beyond simple engineering countermeasures (Salmon et al. 2013). This supports Dekker’s vision of human factors practitioners who will defend the accused. The DSA approach sits squarely on the side of the human operator; the individual cognition view could end up as the human operator’s enemy and prosecutor. We can, however, see why the idea of situation awareness residing within individuals makes system designers and system operators feel safe: whatever the problem, simply remove the faulty ‘component’ operator and carry on with business as usual. To take the systems view would raise fundamental questions about system design and operation: questions that would have profound implications and require considerable redesign and changes in operation.
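
The same network logic can express the failed transaction at the heart of the Kerang case: information held ‘somewhere’ in the system never reached the agent whose task depended on it. The sketch below uses entirely invented data and node names, not the actual findings of Salmon et al. (2013), to show how such an awareness gap could be flagged.

```python
# Hedged, hypothetical sketch: flag agents who needed a piece of
# information but whom no recorded transaction ever reached.
def awareness_gaps(transactions, info, required_by):
    """`transactions` is a list of (source, receiver, info) tuples that
    actually occurred; returns the agents in `required_by` whom the
    information never reached."""
    reached = set()
    for source, receiver, i in transactions:
        if i == info:
            reached.update((source, receiver))
    return set(required_by) - reached

# Invented reconstruction: near-miss reports circulated within the
# rail organisation, but no transaction carried them to road users.
recorded = [
    ("train_crews", "rail_operator", "near-miss history"),
    ("rail_operator", "risk_assessment_tool", "near-miss history"),
]
print(awareness_gaps(recorded, "near-miss history",
                     required_by=["truck_driver", "rail_operator"]))
# -> {'truck_driver'}: the transaction was not achieved
```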

2.4 The future

How do Dekker’s key issues look now? Dekker criticises the circularity of complacency, attentional bias and loss of situation awareness as follows:

  • Why did you lose situation awareness?

  • Because you were complacent.

  • How do we know you were complacent?

  • Because you lost situation awareness.

From the systems-level viewpoint, this circularity is removed and replaced by the following:

  • Why did the system lose situation awareness?

  • Because it’s an emergent property of sociotechnical systems.

  • How do we know it’s emergent?

  • Because it’s not reducible to the level of ‘individual cognition’.

How does Dekker’s hypothetical scenario in which a doctor is blamed for a patient’s death because they lost situation awareness look when compared with DSA?

Q. Wouldn’t you agree, doctor, that accurate situation awareness by the system in which you work is integral for providing optimal performance during the treatment of patients? This is what the leading journal in your specialty claims. See, here it says so [counsel points to exhibit].

A. Uh, I’d have to agree.

Q. Would you say, doctor, that the performance of the system in this case, in which your patient died as a result of the care you provided, was optimal?

A. Uh, we all hoped for a different outcome.

Q. Were you, or were you not, aware of the situation that this particular drug X, when used in combination with Y and Z, had produced problems for this patient eighteen years before, when she was living in another State?

A. I was not aware of that at the time, no; however, the system may have been, but there was no transaction of awareness between components. I mean, the information was held ‘somewhere’ in the system, but it didn’t get to me. The system failed.

Q. Yet you agreed that accurate situation awareness is integral for providing optimal performance during the treatment of patients?

A. Yes, I did, but situation awareness doesn’t reside exclusively in my head or in my surgical team’s heads; it is distributed around the system. The system as a whole is designed to get that information to me, but it didn’t.

Q. Thank you. In that case I have no option but to call the next fourteen witnesses all together, hereafter referred to as ‘the system’ (the designer, the trainer, the computer programmer, the chief executive, the nurses, the anaesthetist, etc.) and as exhibits we would like to present all of the computer systems and documents associated with the patient’s notes and medication. The jury is asked to consider how this system of people and technology could lead to an outcome like this, and what lessons we can learn for the future.

3 Conclusion

Situation awareness is a key concept for safety science. Dekker’s thought-provoking article (this issue) is quite correct in criticising the component-level view of situation awareness and the danger associated with identifying ‘loss of situation awareness’ by human operators. As he points out, human factors and safety research have always been on the side of the human operator, and the inappropriate use of terms such as ‘loss of situation awareness’ in accident investigations threatens this (Dekker 2013; this issue). We agree. DSA (Salmon et al. 2009; Stanton et al. 2006) has been put forward as a systems-level account of situation awareness to show that practitioners can continue to champion the construct, but to champion it as a systems-level phenomenon, not a component-level one. This will keep the discipline where it needs, and wants, to be: as an advocate for humans in systems and as a solution to the real underlying issues.